00:00:00.000 Started by upstream project "autotest-per-patch" build number 122879 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.033 The recommended git tool is: git 00:00:00.033 using credential 00000000-0000-0000-0000-000000000002 00:00:00.034 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.046 Fetching changes from the remote Git repository 00:00:00.051 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.074 Using shallow fetch with depth 1 00:00:00.074 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.074 > git --version # timeout=10 00:00:00.087 > git --version # 'git version 2.39.2' 00:00:00.087 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.088 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.088 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.590 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.600 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.612 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:06.612 > git config core.sparsecheckout # timeout=10 00:00:06.623 > git read-tree -mu HEAD # timeout=10 00:00:06.638 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:06.654 Commit message: "inventory/dev: add missing long names" 00:00:06.654 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:06.770 [Pipeline] Start of Pipeline 00:00:06.780 [Pipeline] library 00:00:06.781 Loading library shm_lib@master 00:00:06.781 Library shm_lib@master is cached. Copying from home. 00:00:06.792 [Pipeline] node 00:00:06.819 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.820 [Pipeline] { 00:00:06.826 [Pipeline] catchError 00:00:06.827 [Pipeline] { 00:00:06.836 [Pipeline] wrap 00:00:06.843 [Pipeline] { 00:00:06.849 [Pipeline] stage 00:00:06.850 [Pipeline] { (Prologue) 00:00:07.035 [Pipeline] sh 00:00:07.929 + logger -p user.info -t JENKINS-CI 00:00:07.953 [Pipeline] echo 00:00:07.955 Node: CYP9 00:00:07.964 [Pipeline] sh 00:00:08.326 [Pipeline] setCustomBuildProperty 00:00:08.338 [Pipeline] echo 00:00:08.340 Cleanup processes 00:00:08.345 [Pipeline] sh 00:00:08.645 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.645 6343 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.661 [Pipeline] sh 00:00:08.964 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.964 ++ grep -v 'sudo pgrep' 00:00:08.964 ++ awk '{print $1}' 00:00:08.964 + sudo kill -9 00:00:08.964 + true 00:00:08.987 [Pipeline] cleanWs 00:00:08.998 [WS-CLEANUP] Deleting project workspace... 00:00:08.998 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.012 [WS-CLEANUP] done 00:00:09.017 [Pipeline] setCustomBuildProperty 00:00:09.032 [Pipeline] sh 00:00:09.325 + sudo git config --global --replace-all safe.directory '*' 00:00:09.396 [Pipeline] nodesByLabel 00:00:09.397 Found a total of 1 nodes with the 'sorcerer' label 00:00:09.408 [Pipeline] httpRequest 00:00:09.703 HttpMethod: GET 00:00:09.703 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:10.689 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:10.981 Response Code: HTTP/1.1 200 OK 00:00:11.068 Success: Status code 200 is in the accepted range: 200,404 00:00:11.068 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:12.013 [Pipeline] sh 00:00:12.310 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:12.332 [Pipeline] httpRequest 00:00:12.338 HttpMethod: GET 00:00:12.338 URL: http://10.211.164.101/packages/spdk_7d4b198309e2cbb565c38250f14666192859554c.tar.gz 00:00:12.341 Sending request to url: http://10.211.164.101/packages/spdk_7d4b198309e2cbb565c38250f14666192859554c.tar.gz 00:00:12.354 Response Code: HTTP/1.1 200 OK 00:00:12.355 Success: Status code 200 is in the accepted range: 200,404 00:00:12.355 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_7d4b198309e2cbb565c38250f14666192859554c.tar.gz 00:00:32.872 [Pipeline] sh 00:00:33.175 + tar --no-same-owner -xf spdk_7d4b198309e2cbb565c38250f14666192859554c.tar.gz 00:00:36.500 [Pipeline] sh 00:00:36.793 + git -C spdk log --oneline -n5 00:00:36.793 7d4b19830 lib/idxd: DIF strip DSA implementation 00:00:36.793 913aa023f test/accel: DIF verify and generate copy accel functional tests refactor 00:00:36.793 0008c8571 test/accel: DIF verify copy accel functional tests 00:00:36.793 ea11a8089 examples/accel: DIF verify copy accel perf tests 00:00:36.793 4b43b7c22 lib/accel: DIF verify copy accel SW implementation 00:00:36.804 [Pipeline] } 00:00:36.817 [Pipeline] // stage 00:00:36.824 [Pipeline] stage 00:00:36.826 [Pipeline] { (Prepare) 00:00:36.842 [Pipeline] writeFile 00:00:36.856 [Pipeline] sh 00:00:37.146 + logger -p user.info -t JENKINS-CI 00:00:37.162 [Pipeline] sh 00:00:37.455 + logger -p user.info -t JENKINS-CI 00:00:37.469 [Pipeline] sh 00:00:37.761 + cat autorun-spdk.conf 00:00:37.761 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.761 SPDK_TEST_NVMF=1 00:00:37.761 SPDK_TEST_NVME_CLI=1 00:00:37.761 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:37.761 SPDK_TEST_NVMF_NICS=e810 00:00:37.761 SPDK_TEST_VFIOUSER=1 00:00:37.761 SPDK_RUN_UBSAN=1 00:00:37.761 NET_TYPE=phy 00:00:37.771 RUN_NIGHTLY=0 00:00:37.776 [Pipeline] readFile 00:00:37.825 [Pipeline] withEnv 00:00:37.827 [Pipeline] { 00:00:37.842 [Pipeline] sh 00:00:38.136 + set -ex 00:00:38.136 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:38.136 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.136 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.136 ++ SPDK_TEST_NVMF=1 00:00:38.136 ++ SPDK_TEST_NVME_CLI=1 00:00:38.136 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.136 ++ SPDK_TEST_NVMF_NICS=e810 00:00:38.136 ++ SPDK_TEST_VFIOUSER=1 00:00:38.136 ++ SPDK_RUN_UBSAN=1 00:00:38.136 ++ NET_TYPE=phy 00:00:38.136 ++ RUN_NIGHTLY=0 00:00:38.136 + case $SPDK_TEST_NVMF_NICS in 00:00:38.136 + DRIVERS=ice 00:00:38.136 + [[ tcp == \r\d\m\a ]] 00:00:38.136 + [[ -n ice ]] 00:00:38.136 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw 
iw_cxgb4 00:00:38.136 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:44.741 rmmod: ERROR: Module irdma is not currently loaded 00:00:44.741 rmmod: ERROR: Module i40iw is not currently loaded 00:00:44.741 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:44.741 + true 00:00:44.741 + for D in $DRIVERS 00:00:44.741 + sudo modprobe ice 00:00:44.741 + exit 0 00:00:44.754 [Pipeline] } 00:00:44.773 [Pipeline] // withEnv 00:00:44.780 [Pipeline] } 00:00:44.801 [Pipeline] // stage 00:00:44.810 [Pipeline] catchError 00:00:44.812 [Pipeline] { 00:00:44.829 [Pipeline] timeout 00:00:44.829 Timeout set to expire in 40 min 00:00:44.831 [Pipeline] { 00:00:44.847 [Pipeline] stage 00:00:44.849 [Pipeline] { (Tests) 00:00:44.864 [Pipeline] sh 00:00:45.163 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.163 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.163 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.163 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:45.163 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:45.163 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.163 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:45.163 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.163 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.163 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.163 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.163 + source /etc/os-release 00:00:45.163 ++ NAME='Fedora Linux' 00:00:45.163 ++ VERSION='38 (Cloud Edition)' 00:00:45.163 ++ ID=fedora 00:00:45.163 ++ VERSION_ID=38 00:00:45.163 ++ VERSION_CODENAME= 00:00:45.163 ++ PLATFORM_ID=platform:f38 00:00:45.163 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:45.163 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:45.163 ++ LOGO=fedora-logo-icon 00:00:45.163 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:45.163 ++ HOME_URL=https://fedoraproject.org/ 00:00:45.163 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:45.163 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:45.163 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:45.163 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:45.163 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:45.163 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:45.163 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:45.163 ++ SUPPORT_END=2024-05-14 00:00:45.163 ++ VARIANT='Cloud Edition' 00:00:45.163 ++ VARIANT_ID=cloud 00:00:45.163 + uname -a 00:00:45.163 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:45.163 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:48.474 Hugepages 00:00:48.474 node hugesize free / total 00:00:48.474 node0 1048576kB 0 / 0 00:00:48.474 node0 2048kB 0 / 0 00:00:48.474 node1 1048576kB 0 / 0 00:00:48.474 node1 2048kB 0 / 0 00:00:48.474 00:00:48.474 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:48.474 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:48.474 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:48.474 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:48.474 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:48.474 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:48.474 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:48.474 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 
00:00:48.474 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:48.474 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:48.474 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:48.474 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:48.474 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:48.474 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:48.474 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:48.474 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:48.474 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:48.474 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:48.474 + rm -f /tmp/spdk-ld-path 00:00:48.474 + source autorun-spdk.conf 00:00:48.474 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.474 ++ SPDK_TEST_NVMF=1 00:00:48.474 ++ SPDK_TEST_NVME_CLI=1 00:00:48.474 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.474 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.474 ++ SPDK_TEST_VFIOUSER=1 00:00:48.474 ++ SPDK_RUN_UBSAN=1 00:00:48.474 ++ NET_TYPE=phy 00:00:48.474 ++ RUN_NIGHTLY=0 00:00:48.474 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:48.474 + [[ -n '' ]] 00:00:48.474 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.474 + for M in /var/spdk/build-*-manifest.txt 00:00:48.474 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:48.474 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.474 + for M in /var/spdk/build-*-manifest.txt 00:00:48.474 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:48.474 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.474 ++ uname 00:00:48.474 + [[ Linux == \L\i\n\u\x ]] 00:00:48.474 + sudo dmesg -T 00:00:48.474 + sudo dmesg --clear 00:00:48.474 + dmesg_pid=7297 00:00:48.474 + [[ Fedora Linux == FreeBSD ]] 00:00:48.474 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.474 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.474 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:48.474 + sudo dmesg -Tw 00:00:48.474 + [[ -x /usr/src/fio-static/fio ]] 00:00:48.474 + export FIO_BIN=/usr/src/fio-static/fio 00:00:48.474 + FIO_BIN=/usr/src/fio-static/fio 00:00:48.474 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:48.474 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:48.474 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:48.474 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.474 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.474 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:48.474 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.474 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.474 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.474 Test configuration: 00:00:48.474 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.474 SPDK_TEST_NVMF=1 00:00:48.474 SPDK_TEST_NVME_CLI=1 00:00:48.474 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.474 SPDK_TEST_NVMF_NICS=e810 00:00:48.474 SPDK_TEST_VFIOUSER=1 00:00:48.474 SPDK_RUN_UBSAN=1 00:00:48.474 NET_TYPE=phy 00:00:48.474 RUN_NIGHTLY=0 10:46:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:48.474 10:46:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:48.474 10:46:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:48.474 10:46:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:48.474 10:46:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.474 10:46:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.474 10:46:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.474 10:46:44 -- paths/export.sh@5 -- $ export PATH 00:00:48.474 10:46:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.474 10:46:44 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:48.474 10:46:44 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:48.474 10:46:44 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715762804.XXXXXX 00:00:48.474 10:46:44 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715762804.paqOhz 00:00:48.474 10:46:44 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:48.474 10:46:44 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:48.474 10:46:44 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:48.474 10:46:44 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:48.475 10:46:44 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:48.475 10:46:44 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:48.475 10:46:44 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:48.475 10:46:44 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.475 10:46:45 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:48.475 10:46:45 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:48.475 10:46:45 -- pm/common@17 -- $ local monitor 00:00:48.475 10:46:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.475 10:46:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.475 10:46:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.475 10:46:45 -- pm/common@21 -- $ date +%s 00:00:48.475 10:46:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.475 10:46:45 -- pm/common@21 -- $ date +%s 00:00:48.475 10:46:45 -- pm/common@25 -- $ sleep 1 00:00:48.475 10:46:45 -- pm/common@21 -- $ date +%s 00:00:48.475 10:46:45 -- pm/common@21 -- $ date +%s 00:00:48.475 10:46:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715762805 00:00:48.475 10:46:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715762805 00:00:48.475 10:46:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715762805 00:00:48.475 10:46:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715762805 00:00:48.475 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715762805_collect-vmstat.pm.log 00:00:48.475 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715762805_collect-cpu-load.pm.log 00:00:48.475 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715762805_collect-cpu-temp.pm.log 00:00:48.475 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715762805_collect-bmc-pm.bmc.pm.log 00:00:49.421 10:46:46 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:49.421 10:46:46 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.421 10:46:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.421 10:46:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.421 10:46:46 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.421 Wed May 15 08:46:46 AM UTC 2024 00:00:49.421 10:46:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.421 v24.05-pre-562-g7d4b19830 00:00:49.421 10:46:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:49.421 10:46:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.421 10:46:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.421 10:46:46 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:49.421 10:46:46 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:49.421 10:46:46 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.684 ************************************ 00:00:49.684 START TEST ubsan 00:00:49.684 ************************************ 00:00:49.684 10:46:46 -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:49.684 using ubsan 00:00:49.684 00:00:49.684 real 0m0.000s 00:00:49.684 user 0m0.000s 00:00:49.684 sys 0m0.000s 00:00:49.684 10:46:46 -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:49.684 10:46:46 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.684 ************************************ 00:00:49.684 END TEST ubsan 00:00:49.684 ************************************ 00:00:49.684 10:46:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:49.684 10:46:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:49.684 10:46:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:49.684 10:46:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:49.684 10:46:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:49.684 10:46:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:49.684 10:46:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:49.684 10:46:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:49.684 10:46:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:50.259 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:50.259 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:51.204 Using 'verbs' RDMA provider 00:01:10.284 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:22.534 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:22.534 Creating mk/config.mk...done. 00:01:22.534 Creating mk/cc.flags.mk...done. 00:01:22.534 Type 'make' to build. 00:01:22.534 10:47:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:22.534 10:47:18 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:22.534 10:47:18 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:22.534 10:47:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.534 ************************************ 00:01:22.534 START TEST make 00:01:22.534 ************************************ 00:01:22.534 10:47:18 -- common/autotest_common.sh@1121 -- $ make -j144 00:01:22.534 make[1]: Nothing to be done for 'all'. 
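The configure invocation and 'make' run recorded above can be reproduced outside of Jenkins with the same options; a minimal sketch, assuming an SPDK checkout as the working directory, fio sources present at /usr/src/fio, and a local job count (the CI host itself runs 'run_test make make -j144'):

./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j"$(nproc)"    # CI uses -j144 on this machine; adjust to local core count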
00:01:24.456 The Meson build system
00:01:24.456 Version: 1.3.1
00:01:24.456 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:24.456 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:24.456 Build type: native build
00:01:24.456 Project name: libvfio-user
00:01:24.456 Project version: 0.0.1
00:01:24.456 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:24.456 C linker for the host machine: cc ld.bfd 2.39-16
00:01:24.456 Host machine cpu family: x86_64
00:01:24.456 Host machine cpu: x86_64
00:01:24.456 Run-time dependency threads found: YES
00:01:24.456 Library dl found: YES
00:01:24.456 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:24.456 Run-time dependency json-c found: YES 0.17
00:01:24.456 Run-time dependency cmocka found: YES 1.1.7
00:01:24.456 Program pytest-3 found: NO
00:01:24.456 Program flake8 found: NO
00:01:24.456 Program misspell-fixer found: NO
00:01:24.456 Program restructuredtext-lint found: NO
00:01:24.456 Program valgrind found: YES (/usr/bin/valgrind)
00:01:24.456 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:24.456 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:24.456 Compiler for C supports arguments -Wwrite-strings: YES
00:01:24.456 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.456 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:24.456 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:24.456 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.456 Build targets in project: 8 00:01:24.456 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:24.456 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:24.456 00:01:24.456 libvfio-user 0.0.1 00:01:24.456 00:01:24.456 User defined options 00:01:24.456 buildtype : debug 00:01:24.456 default_library: shared 00:01:24.456 libdir : /usr/local/lib 00:01:24.456 00:01:24.456 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.719 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:24.719 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:24.719 [2/37] Compiling C object samples/null.p/null.c.o 00:01:24.719 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:24.719 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:24.719 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:24.719 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:24.719 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:24.719 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:24.719 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:24.719 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:24.719 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:24.719 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:24.719 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:24.719 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:24.719 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:24.719 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:24.719 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:24.719 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:24.719 [19/37] Compiling C object samples/server.p/server.c.o 00:01:24.719 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:24.719 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:24.719 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:24.719 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:24.719 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:24.719 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:24.719 [26/37] Compiling C object samples/client.p/client.c.o 00:01:24.719 [27/37] Linking target samples/client 00:01:24.719 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:24.982 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:24.982 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:24.982 [31/37] Linking target test/unit_tests 00:01:24.982 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:24.982 [33/37] Linking target samples/gpio-pci-idio-16 00:01:24.982 [34/37] Linking target samples/lspci 00:01:24.982 [35/37] Linking target samples/null 00:01:24.982 [36/37] Linking target samples/server 00:01:24.982 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:24.982 INFO: autodetecting backend as ninja 00:01:24.982 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
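The libvfio-user steps above are a stock Meson debug build driven from the SPDK tree; a rough standalone equivalent, assuming a libvfio-user checkout with meson and ninja on PATH (the build directory name and install staging path below are illustrative, not taken from this log):

meson setup build-debug --buildtype=debug --default-library=shared --libdir=/usr/local/lib
ninja -C build-debug
DESTDIR=/tmp/libvfio-user-stage meson install --quiet -C build-debug   # staging path is hypothetical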
00:01:25.244 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:25.505 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:25.505 ninja: no work to do. 00:01:30.807 The Meson build system 00:01:30.807 Version: 1.3.1 00:01:30.807 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:30.807 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:30.807 Build type: native build 00:01:30.807 Program cat found: YES (/usr/bin/cat) 00:01:30.807 Project name: DPDK 00:01:30.807 Project version: 23.11.0 00:01:30.807 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:30.807 C linker for the host machine: cc ld.bfd 2.39-16 00:01:30.807 Host machine cpu family: x86_64 00:01:30.807 Host machine cpu: x86_64 00:01:30.807 Message: ## Building in Developer Mode ## 00:01:30.807 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:30.807 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:30.807 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:30.807 Program python3 found: YES (/usr/bin/python3) 00:01:30.807 Program cat found: YES (/usr/bin/cat) 00:01:30.807 Compiler for C supports arguments -march=native: YES 00:01:30.807 Checking for size of "void *" : 8 00:01:30.807 Checking for size of "void *" : 8 (cached) 00:01:30.807 Library m found: YES 00:01:30.807 Library numa found: YES 00:01:30.807 Has header "numaif.h" : YES 00:01:30.807 Library fdt found: NO 00:01:30.807 Library execinfo found: NO 00:01:30.807 Has header "execinfo.h" : YES 00:01:30.807 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:30.807 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:30.807 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:30.807 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:30.807 Run-time dependency openssl found: YES 3.0.9 00:01:30.807 Run-time dependency libpcap found: YES 1.10.4 00:01:30.807 Has header "pcap.h" with dependency libpcap: YES 00:01:30.807 Compiler for C supports arguments -Wcast-qual: YES 00:01:30.807 Compiler for C supports arguments -Wdeprecated: YES 00:01:30.807 Compiler for C supports arguments -Wformat: YES 00:01:30.807 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:30.807 Compiler for C supports arguments -Wformat-security: NO 00:01:30.807 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:30.807 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:30.807 Compiler for C supports arguments -Wnested-externs: YES 00:01:30.807 Compiler for C supports arguments -Wold-style-definition: YES 00:01:30.807 Compiler for C supports arguments -Wpointer-arith: YES 00:01:30.807 Compiler for C supports arguments -Wsign-compare: YES 00:01:30.807 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:30.807 Compiler for C supports arguments -Wundef: YES 00:01:30.807 Compiler for C supports arguments -Wwrite-strings: YES 00:01:30.807 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:30.807 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:30.807 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:30.807 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:30.807 Program objdump found: YES (/usr/bin/objdump) 00:01:30.807 Compiler for C supports arguments -mavx512f: YES 00:01:30.807 Checking if "AVX512 checking" compiles: YES 00:01:30.807 Fetching value of define "__SSE4_2__" : 1 00:01:30.807 Fetching value of define "__AES__" : 1 00:01:30.807 Fetching value of define "__AVX__" : 1 00:01:30.807 Fetching value of define "__AVX2__" : 1 00:01:30.807 Fetching value of define "__AVX512BW__" : 1 00:01:30.807 Fetching value of define "__AVX512CD__" : 1 00:01:30.807 Fetching value of define "__AVX512DQ__" : 1 00:01:30.807 Fetching value of define "__AVX512F__" : 1 00:01:30.808 Fetching value of define "__AVX512VL__" : 1 00:01:30.808 Fetching value of define "__PCLMUL__" : 1 00:01:30.808 Fetching value of define "__RDRND__" : 1 00:01:30.808 Fetching value of define "__RDSEED__" : 1 00:01:30.808 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:30.808 Fetching value of define "__znver1__" : (undefined) 00:01:30.808 Fetching value of define "__znver2__" : (undefined) 00:01:30.808 Fetching value of define "__znver3__" : (undefined) 00:01:30.808 Fetching value of define "__znver4__" : (undefined) 00:01:30.808 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:30.808 Message: lib/log: Defining dependency "log" 00:01:30.808 Message: lib/kvargs: Defining dependency "kvargs" 00:01:30.808 Message: lib/telemetry: Defining dependency "telemetry" 00:01:30.808 Checking for function "getentropy" : NO 00:01:30.808 Message: lib/eal: Defining dependency "eal" 00:01:30.808 Message: lib/ring: Defining dependency "ring" 00:01:30.808 Message: lib/rcu: Defining dependency "rcu" 00:01:30.808 Message: lib/mempool: Defining dependency "mempool" 00:01:30.808 Message: lib/mbuf: Defining dependency "mbuf" 00:01:30.808 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:30.808 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:30.808 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:30.808 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:30.808 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:30.808 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:30.808 Compiler for C supports arguments -mpclmul: YES 00:01:30.808 Compiler for C supports arguments -maes: YES 00:01:30.808 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.808 Compiler for C supports arguments -mavx512bw: YES 00:01:30.808 Compiler for C supports arguments -mavx512dq: YES 00:01:30.808 Compiler for C supports arguments -mavx512vl: YES 00:01:30.808 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:30.808 Compiler for C supports arguments -mavx2: YES 00:01:30.808 Compiler for C supports arguments -mavx: YES 00:01:30.808 Message: lib/net: Defining dependency "net" 00:01:30.808 Message: lib/meter: Defining dependency "meter" 00:01:30.808 Message: lib/ethdev: Defining dependency "ethdev" 00:01:30.808 Message: lib/pci: Defining dependency "pci" 00:01:30.808 Message: lib/cmdline: Defining dependency "cmdline" 00:01:30.808 Message: lib/hash: Defining dependency "hash" 00:01:30.808 Message: lib/timer: Defining dependency "timer" 00:01:30.808 Message: lib/compressdev: Defining dependency "compressdev" 00:01:30.808 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:30.808 Message: lib/dmadev: Defining dependency "dmadev" 00:01:30.808 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:30.808 
Message: lib/power: Defining dependency "power" 00:01:30.808 Message: lib/reorder: Defining dependency "reorder" 00:01:30.808 Message: lib/security: Defining dependency "security" 00:01:30.808 Has header "linux/userfaultfd.h" : YES 00:01:30.808 Has header "linux/vduse.h" : YES 00:01:30.808 Message: lib/vhost: Defining dependency "vhost" 00:01:30.808 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:30.808 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:30.808 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:30.808 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:30.808 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:30.808 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:30.808 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:30.808 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:30.808 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:30.808 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:30.808 Program doxygen found: YES (/usr/bin/doxygen) 00:01:30.808 Configuring doxy-api-html.conf using configuration 00:01:30.808 Configuring doxy-api-man.conf using configuration 00:01:30.808 Program mandb found: YES (/usr/bin/mandb) 00:01:30.808 Program sphinx-build found: NO 00:01:30.808 Configuring rte_build_config.h using configuration 00:01:30.808 Message: 00:01:30.808 ================= 00:01:30.808 Applications Enabled 00:01:30.808 ================= 00:01:30.808 00:01:30.808 apps: 00:01:30.808 00:01:30.808 00:01:30.808 Message: 00:01:30.808 ================= 00:01:30.808 Libraries Enabled 00:01:30.808 ================= 00:01:30.808 00:01:30.808 libs: 00:01:30.808 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:30.808 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:30.808 cryptodev, dmadev, power, reorder, security, vhost, 00:01:30.808 00:01:30.808 Message: 00:01:30.808 =============== 00:01:30.808 Drivers Enabled 00:01:30.808 =============== 00:01:30.808 00:01:30.808 common: 00:01:30.808 00:01:30.808 bus: 00:01:30.808 pci, vdev, 00:01:30.808 mempool: 00:01:30.808 ring, 00:01:30.808 dma: 00:01:30.808 00:01:30.808 net: 00:01:30.808 00:01:30.808 crypto: 00:01:30.808 00:01:30.808 compress: 00:01:30.808 00:01:30.808 vdpa: 00:01:30.808 00:01:30.808 00:01:30.808 Message: 00:01:30.808 ================= 00:01:30.808 Content Skipped 00:01:30.808 ================= 00:01:30.808 00:01:30.808 apps: 00:01:30.808 dumpcap: explicitly disabled via build config 00:01:30.808 graph: explicitly disabled via build config 00:01:30.808 pdump: explicitly disabled via build config 00:01:30.808 proc-info: explicitly disabled via build config 00:01:30.808 test-acl: explicitly disabled via build config 00:01:30.808 test-bbdev: explicitly disabled via build config 00:01:30.808 test-cmdline: explicitly disabled via build config 00:01:30.808 test-compress-perf: explicitly disabled via build config 00:01:30.808 test-crypto-perf: explicitly disabled via build config 00:01:30.808 test-dma-perf: explicitly disabled via build config 00:01:30.808 test-eventdev: explicitly disabled via build config 00:01:30.808 test-fib: explicitly disabled via build config 00:01:30.808 test-flow-perf: explicitly disabled via build config 00:01:30.808 test-gpudev: explicitly disabled via build config 00:01:30.808 test-mldev: explicitly disabled via build config 
00:01:30.808 test-pipeline: explicitly disabled via build config 00:01:30.808 test-pmd: explicitly disabled via build config 00:01:30.808 test-regex: explicitly disabled via build config 00:01:30.808 test-sad: explicitly disabled via build config 00:01:30.808 test-security-perf: explicitly disabled via build config 00:01:30.808 00:01:30.808 libs: 00:01:30.808 metrics: explicitly disabled via build config 00:01:30.808 acl: explicitly disabled via build config 00:01:30.808 bbdev: explicitly disabled via build config 00:01:30.808 bitratestats: explicitly disabled via build config 00:01:30.808 bpf: explicitly disabled via build config 00:01:30.808 cfgfile: explicitly disabled via build config 00:01:30.808 distributor: explicitly disabled via build config 00:01:30.808 efd: explicitly disabled via build config 00:01:30.808 eventdev: explicitly disabled via build config 00:01:30.808 dispatcher: explicitly disabled via build config 00:01:30.808 gpudev: explicitly disabled via build config 00:01:30.808 gro: explicitly disabled via build config 00:01:30.808 gso: explicitly disabled via build config 00:01:30.808 ip_frag: explicitly disabled via build config 00:01:30.808 jobstats: explicitly disabled via build config 00:01:30.808 latencystats: explicitly disabled via build config 00:01:30.808 lpm: explicitly disabled via build config 00:01:30.808 member: explicitly disabled via build config 00:01:30.808 pcapng: explicitly disabled via build config 00:01:30.808 rawdev: explicitly disabled via build config 00:01:30.808 regexdev: explicitly disabled via build config 00:01:30.808 mldev: explicitly disabled via build config 00:01:30.808 rib: explicitly disabled via build config 00:01:30.808 sched: explicitly disabled via build config 00:01:30.808 stack: explicitly disabled via build config 00:01:30.808 ipsec: explicitly disabled via build config 00:01:30.808 pdcp: explicitly disabled via build config 00:01:30.808 fib: explicitly disabled via build config 00:01:30.808 port: explicitly disabled via build config 00:01:30.808 pdump: explicitly disabled via build config 00:01:30.808 table: explicitly disabled via build config 00:01:30.808 pipeline: explicitly disabled via build config 00:01:30.808 graph: explicitly disabled via build config 00:01:30.808 node: explicitly disabled via build config 00:01:30.808 00:01:30.808 drivers: 00:01:30.808 common/cpt: not in enabled drivers build config 00:01:30.808 common/dpaax: not in enabled drivers build config 00:01:30.808 common/iavf: not in enabled drivers build config 00:01:30.808 common/idpf: not in enabled drivers build config 00:01:30.808 common/mvep: not in enabled drivers build config 00:01:30.808 common/octeontx: not in enabled drivers build config 00:01:30.808 bus/auxiliary: not in enabled drivers build config 00:01:30.808 bus/cdx: not in enabled drivers build config 00:01:30.808 bus/dpaa: not in enabled drivers build config 00:01:30.808 bus/fslmc: not in enabled drivers build config 00:01:30.808 bus/ifpga: not in enabled drivers build config 00:01:30.808 bus/platform: not in enabled drivers build config 00:01:30.808 bus/vmbus: not in enabled drivers build config 00:01:30.808 common/cnxk: not in enabled drivers build config 00:01:30.808 common/mlx5: not in enabled drivers build config 00:01:30.808 common/nfp: not in enabled drivers build config 00:01:30.808 common/qat: not in enabled drivers build config 00:01:30.808 common/sfc_efx: not in enabled drivers build config 00:01:30.808 mempool/bucket: not in enabled drivers build config 00:01:30.808 mempool/cnxk: 
not in enabled drivers build config 00:01:30.808 mempool/dpaa: not in enabled drivers build config 00:01:30.808 mempool/dpaa2: not in enabled drivers build config 00:01:30.808 mempool/octeontx: not in enabled drivers build config 00:01:30.808 mempool/stack: not in enabled drivers build config 00:01:30.808 dma/cnxk: not in enabled drivers build config 00:01:30.808 dma/dpaa: not in enabled drivers build config 00:01:30.808 dma/dpaa2: not in enabled drivers build config 00:01:30.808 dma/hisilicon: not in enabled drivers build config 00:01:30.808 dma/idxd: not in enabled drivers build config 00:01:30.808 dma/ioat: not in enabled drivers build config 00:01:30.808 dma/skeleton: not in enabled drivers build config 00:01:30.808 net/af_packet: not in enabled drivers build config 00:01:30.808 net/af_xdp: not in enabled drivers build config 00:01:30.808 net/ark: not in enabled drivers build config 00:01:30.808 net/atlantic: not in enabled drivers build config 00:01:30.808 net/avp: not in enabled drivers build config 00:01:30.808 net/axgbe: not in enabled drivers build config 00:01:30.808 net/bnx2x: not in enabled drivers build config 00:01:30.809 net/bnxt: not in enabled drivers build config 00:01:30.809 net/bonding: not in enabled drivers build config 00:01:30.809 net/cnxk: not in enabled drivers build config 00:01:30.809 net/cpfl: not in enabled drivers build config 00:01:30.809 net/cxgbe: not in enabled drivers build config 00:01:30.809 net/dpaa: not in enabled drivers build config 00:01:30.809 net/dpaa2: not in enabled drivers build config 00:01:30.809 net/e1000: not in enabled drivers build config 00:01:30.809 net/ena: not in enabled drivers build config 00:01:30.809 net/enetc: not in enabled drivers build config 00:01:30.809 net/enetfec: not in enabled drivers build config 00:01:30.809 net/enic: not in enabled drivers build config 00:01:30.809 net/failsafe: not in enabled drivers build config 00:01:30.809 net/fm10k: not in enabled drivers build config 00:01:30.809 net/gve: not in enabled drivers build config 00:01:30.809 net/hinic: not in enabled drivers build config 00:01:30.809 net/hns3: not in enabled drivers build config 00:01:30.809 net/i40e: not in enabled drivers build config 00:01:30.809 net/iavf: not in enabled drivers build config 00:01:30.809 net/ice: not in enabled drivers build config 00:01:30.809 net/idpf: not in enabled drivers build config 00:01:30.809 net/igc: not in enabled drivers build config 00:01:30.809 net/ionic: not in enabled drivers build config 00:01:30.809 net/ipn3ke: not in enabled drivers build config 00:01:30.809 net/ixgbe: not in enabled drivers build config 00:01:30.809 net/mana: not in enabled drivers build config 00:01:30.809 net/memif: not in enabled drivers build config 00:01:30.809 net/mlx4: not in enabled drivers build config 00:01:30.809 net/mlx5: not in enabled drivers build config 00:01:30.809 net/mvneta: not in enabled drivers build config 00:01:30.809 net/mvpp2: not in enabled drivers build config 00:01:30.809 net/netvsc: not in enabled drivers build config 00:01:30.809 net/nfb: not in enabled drivers build config 00:01:30.809 net/nfp: not in enabled drivers build config 00:01:30.809 net/ngbe: not in enabled drivers build config 00:01:30.809 net/null: not in enabled drivers build config 00:01:30.809 net/octeontx: not in enabled drivers build config 00:01:30.809 net/octeon_ep: not in enabled drivers build config 00:01:30.809 net/pcap: not in enabled drivers build config 00:01:30.809 net/pfe: not in enabled drivers build config 00:01:30.809 net/qede: 
not in enabled drivers build config 00:01:30.809 net/ring: not in enabled drivers build config 00:01:30.809 net/sfc: not in enabled drivers build config 00:01:30.809 net/softnic: not in enabled drivers build config 00:01:30.809 net/tap: not in enabled drivers build config 00:01:30.809 net/thunderx: not in enabled drivers build config 00:01:30.809 net/txgbe: not in enabled drivers build config 00:01:30.809 net/vdev_netvsc: not in enabled drivers build config 00:01:30.809 net/vhost: not in enabled drivers build config 00:01:30.809 net/virtio: not in enabled drivers build config 00:01:30.809 net/vmxnet3: not in enabled drivers build config 00:01:30.809 raw/*: missing internal dependency, "rawdev" 00:01:30.809 crypto/armv8: not in enabled drivers build config 00:01:30.809 crypto/bcmfs: not in enabled drivers build config 00:01:30.809 crypto/caam_jr: not in enabled drivers build config 00:01:30.809 crypto/ccp: not in enabled drivers build config 00:01:30.809 crypto/cnxk: not in enabled drivers build config 00:01:30.809 crypto/dpaa_sec: not in enabled drivers build config 00:01:30.809 crypto/dpaa2_sec: not in enabled drivers build config 00:01:30.809 crypto/ipsec_mb: not in enabled drivers build config 00:01:30.809 crypto/mlx5: not in enabled drivers build config 00:01:30.809 crypto/mvsam: not in enabled drivers build config 00:01:30.809 crypto/nitrox: not in enabled drivers build config 00:01:30.809 crypto/null: not in enabled drivers build config 00:01:30.809 crypto/octeontx: not in enabled drivers build config 00:01:30.809 crypto/openssl: not in enabled drivers build config 00:01:30.809 crypto/scheduler: not in enabled drivers build config 00:01:30.809 crypto/uadk: not in enabled drivers build config 00:01:30.809 crypto/virtio: not in enabled drivers build config 00:01:30.809 compress/isal: not in enabled drivers build config 00:01:30.809 compress/mlx5: not in enabled drivers build config 00:01:30.809 compress/octeontx: not in enabled drivers build config 00:01:30.809 compress/zlib: not in enabled drivers build config 00:01:30.809 regex/*: missing internal dependency, "regexdev" 00:01:30.809 ml/*: missing internal dependency, "mldev" 00:01:30.809 vdpa/ifc: not in enabled drivers build config 00:01:30.809 vdpa/mlx5: not in enabled drivers build config 00:01:30.809 vdpa/nfp: not in enabled drivers build config 00:01:30.809 vdpa/sfc: not in enabled drivers build config 00:01:30.809 event/*: missing internal dependency, "eventdev" 00:01:30.809 baseband/*: missing internal dependency, "bbdev" 00:01:30.809 gpu/*: missing internal dependency, "gpudev" 00:01:30.809 00:01:30.809 00:01:30.809 Build targets in project: 84 00:01:30.809 00:01:30.809 DPDK 23.11.0 00:01:30.809 00:01:30.809 User defined options 00:01:30.809 buildtype : debug 00:01:30.809 default_library : shared 00:01:30.809 libdir : lib 00:01:30.809 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.809 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:30.809 c_link_args : 00:01:30.809 cpu_instruction_set: native 00:01:30.809 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:30.809 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:30.809 enable_docs : false 00:01:30.809 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:30.809 enable_kmods : false 00:01:30.809 tests : false 00:01:30.809 00:01:30.809 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.809 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:31.076 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:31.076 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:31.076 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:31.076 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:31.076 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:31.076 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:31.076 [7/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:31.076 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:31.076 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:31.076 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:31.076 [11/264] Linking static target lib/librte_kvargs.a 00:01:31.076 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:31.076 [13/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:31.076 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:31.076 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:31.076 [16/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:31.076 [17/264] Linking static target lib/librte_log.a 00:01:31.076 [18/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:31.342 [19/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:31.342 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:31.342 [21/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:31.342 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.342 [23/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:31.342 [24/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:31.342 [25/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:31.342 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:31.342 [27/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:31.342 [28/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.342 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:31.342 [30/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.342 [31/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:31.342 [32/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.342 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:31.342 [34/264] Linking static target lib/librte_pci.a 00:01:31.342 [35/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:31.342 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:31.342 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:31.342 [38/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:31.342 [39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:31.342 [40/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:31.342 [41/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:31.342 [42/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:31.342 [43/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:31.602 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:31.602 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:31.602 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:31.602 [47/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.602 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:31.602 [49/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:31.602 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:31.602 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:31.602 [52/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.602 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:31.602 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:31.602 [55/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.602 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:31.602 [57/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:31.602 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:31.602 [59/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.602 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:31.602 [61/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:31.602 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:31.602 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:31.602 [64/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:31.602 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:31.602 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:31.602 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.602 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:31.602 [69/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:31.602 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:31.602 [71/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:31.602 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:31.602 [73/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.602 [74/264] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:31.602 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:31.602 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:31.602 [77/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:31.602 [78/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:31.602 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:31.602 [80/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:31.602 [81/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:31.602 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:31.602 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.602 [84/264] Linking static target lib/librte_timer.a 00:01:31.602 [85/264] Linking static target lib/librte_telemetry.a 00:01:31.602 [86/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:31.602 [87/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:31.602 [88/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.602 [89/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.602 [90/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:31.602 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.602 [92/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:31.602 [93/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:31.602 [94/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:31.602 [95/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.602 [96/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.602 [97/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.602 [98/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.603 [99/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.603 [100/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.603 [101/264] Linking static target lib/librte_ring.a 00:01:31.603 [102/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:31.603 [103/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.603 [104/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:31.603 [105/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:31.603 [106/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:31.603 [107/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:31.603 [108/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:31.603 [109/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:31.603 [110/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:31.603 [111/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:31.603 [112/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:31.603 [113/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:31.603 [114/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 
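The numbered [n/264] entries running through this stage are ninja compiling SPDK's bundled DPDK with the configuration echoed in the meson summary above (drivers limited to bus, bus/pci, bus/vdev and mempool/ring; docs, kmods and tests disabled). A minimal sketch of the equivalent standalone configure-and-build step, assuming the DPDK meson option names exactly as they appear in that summary:

  # Sketch only: configure and build the bundled DPDK by hand,
  # mirroring the option summary printed by meson above.
  cd spdk/dpdk
  meson setup build-tmp \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
  ninja -C build-tmp -j "$(nproc)"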
00:01:31.603 [115/264] Linking static target lib/librte_rcu.a 00:01:31.603 [116/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:31.603 [117/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:31.603 [118/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.863 [119/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:31.863 [120/264] Linking static target lib/librte_meter.a 00:01:31.863 [121/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:31.864 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:31.864 [123/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:31.864 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:31.864 [125/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:31.864 [126/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:31.864 [127/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.864 [128/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.864 [129/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:31.864 [130/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:31.864 [131/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:31.864 [132/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:31.864 [133/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:31.864 [134/264] Linking static target lib/librte_net.a 00:01:31.864 [135/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:31.864 [136/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:31.864 [137/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.864 [138/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:31.864 [139/264] Linking static target lib/librte_dmadev.a 00:01:31.864 [140/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:31.864 [141/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:31.864 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:31.864 [143/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:31.864 [144/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.864 [145/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:31.864 [146/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:31.864 [147/264] Linking static target lib/librte_cmdline.a 00:01:31.864 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.864 [149/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:31.864 [150/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.864 [151/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:31.864 [152/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:31.864 [153/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:31.864 [154/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:31.864 [155/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:31.864 
[156/264] Linking static target lib/librte_reorder.a 00:01:31.864 [157/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:31.864 [158/264] Linking target lib/librte_log.so.24.0 00:01:31.864 [159/264] Linking static target lib/librte_mempool.a 00:01:31.864 [160/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:31.864 [161/264] Linking static target lib/librte_power.a 00:01:31.864 [162/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.864 [163/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.864 [164/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:31.864 [165/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:31.864 [166/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:31.864 [167/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:31.864 [168/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:31.864 [169/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:31.864 [170/264] Linking static target lib/librte_compressdev.a 00:01:31.864 [171/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:31.864 [172/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:31.864 [173/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:31.864 [174/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:31.864 [175/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:31.864 [176/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:31.864 [177/264] Linking static target lib/librte_eal.a 00:01:31.864 [178/264] Linking static target lib/librte_mbuf.a 00:01:31.864 [179/264] Linking static target lib/librte_security.a 00:01:31.864 [180/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:32.126 [181/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.126 [182/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.126 [183/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.126 [184/264] Linking static target drivers/librte_bus_vdev.a 00:01:32.126 [185/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:32.126 [186/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.126 [187/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:32.126 [188/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.126 [189/264] Linking static target lib/librte_hash.a 00:01:32.126 [190/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.126 [191/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.126 [192/264] Linking target lib/librte_kvargs.so.24.0 00:01:32.126 [193/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:32.126 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:32.126 [195/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.126 [196/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:32.126 [197/264] Compiling C object 
drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.126 [198/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.126 [199/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.126 [200/264] Linking static target drivers/librte_mempool_ring.a 00:01:32.126 [201/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.126 [202/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.126 [203/264] Linking static target drivers/librte_bus_pci.a 00:01:32.126 [204/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:32.126 [205/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.388 [206/264] Linking target lib/librte_telemetry.so.24.0 00:01:32.388 [207/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:32.388 [208/264] Linking static target lib/librte_cryptodev.a 00:01:32.388 [209/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.388 [210/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.388 [211/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.388 [212/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:32.649 [213/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:32.649 [214/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.649 [215/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.649 [216/264] Linking static target lib/librte_ethdev.a 00:01:32.649 [217/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.911 [218/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.911 [219/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.911 [220/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.911 [221/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.911 [222/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.173 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.747 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:33.747 [225/264] Linking static target lib/librte_vhost.a 00:01:34.695 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.084 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.702 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.648 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.648 [230/264] Linking target lib/librte_eal.so.24.0 00:01:43.911 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:43.911 [232/264] Linking target lib/librte_pci.so.24.0 
00:01:43.911 [233/264] Linking target lib/librte_ring.so.24.0 00:01:43.911 [234/264] Linking target lib/librte_timer.so.24.0 00:01:43.911 [235/264] Linking target lib/librte_meter.so.24.0 00:01:43.911 [236/264] Linking target lib/librte_dmadev.so.24.0 00:01:43.911 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:43.911 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:43.911 [239/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:43.911 [240/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:43.911 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:43.911 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:44.174 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:44.174 [244/264] Linking target lib/librte_rcu.so.24.0 00:01:44.174 [245/264] Linking target lib/librte_mempool.so.24.0 00:01:44.174 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:44.174 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:44.174 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:44.174 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:44.438 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:44.438 [251/264] Linking target lib/librte_reorder.so.24.0 00:01:44.438 [252/264] Linking target lib/librte_net.so.24.0 00:01:44.438 [253/264] Linking target lib/librte_compressdev.so.24.0 00:01:44.438 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:44.701 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:44.701 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:44.701 [257/264] Linking target lib/librte_cmdline.so.24.0 00:01:44.701 [258/264] Linking target lib/librte_hash.so.24.0 00:01:44.701 [259/264] Linking target lib/librte_security.so.24.0 00:01:44.701 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:44.963 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:44.963 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:44.963 [263/264] Linking target lib/librte_power.so.24.0 00:01:44.963 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:44.963 INFO: autodetecting backend as ninja 00:01:44.963 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:46.356 CC lib/ut_mock/mock.o 00:01:46.356 CC lib/ut/ut.o 00:01:46.356 CC lib/log/log.o 00:01:46.356 CC lib/log/log_flags.o 00:01:46.356 CC lib/log/log_deprecated.o 00:01:46.356 LIB libspdk_ut_mock.a 00:01:46.356 LIB libspdk_log.a 00:01:46.356 LIB libspdk_ut.a 00:01:46.356 SO libspdk_ut_mock.so.6.0 00:01:46.356 SO libspdk_ut.so.2.0 00:01:46.356 SO libspdk_log.so.7.0 00:01:46.356 SYMLINK libspdk_ut_mock.so 00:01:46.356 SYMLINK libspdk_ut.so 00:01:46.356 SYMLINK libspdk_log.so 00:01:46.623 CC lib/dma/dma.o 00:01:46.886 CC lib/util/base64.o 00:01:46.886 CXX lib/trace_parser/trace.o 00:01:46.886 CC lib/util/bit_array.o 00:01:46.886 CC lib/util/cpuset.o 00:01:46.886 CC lib/util/crc16.o 00:01:46.886 CC lib/util/crc32.o 00:01:46.886 CC lib/util/crc32c.o 00:01:46.886 CC lib/ioat/ioat.o 00:01:46.886 CC lib/util/crc32_ieee.o 
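From "INFO: autodetecting backend as ninja" onward the output switches from the DPDK sub-build to SPDK's own make, whose quiet rules print one CC/LIB/SO/SYMLINK line per object, archive, shared object and symlink. A minimal sketch of reproducing this stage outside the CI wrapper, assuming SPDK's standard ./configure entry point (the bundled dpdk submodule is normally picked up automatically; --with-dpdk points it at an external tree instead):

  # Sketch only: build SPDK the way this stage does, on top of the DPDK built above.
  cd spdk
  ./configure            # add --with-dpdk=<dir> to use an external DPDK build
  make -j "$(nproc)"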
00:01:46.886 CC lib/util/crc64.o 00:01:46.886 CC lib/util/dif.o 00:01:46.886 CC lib/util/fd.o 00:01:46.886 CC lib/util/file.o 00:01:46.886 CC lib/util/hexlify.o 00:01:46.886 CC lib/util/iov.o 00:01:46.886 CC lib/util/math.o 00:01:46.886 CC lib/util/pipe.o 00:01:46.886 CC lib/util/uuid.o 00:01:46.886 CC lib/util/strerror_tls.o 00:01:46.886 CC lib/util/string.o 00:01:46.886 CC lib/util/fd_group.o 00:01:46.886 CC lib/util/zipf.o 00:01:46.886 CC lib/util/xor.o 00:01:46.886 CC lib/vfio_user/host/vfio_user_pci.o 00:01:46.886 CC lib/vfio_user/host/vfio_user.o 00:01:46.886 LIB libspdk_dma.a 00:01:46.886 SO libspdk_dma.so.4.0 00:01:47.149 SYMLINK libspdk_dma.so 00:01:47.149 LIB libspdk_ioat.a 00:01:47.149 SO libspdk_ioat.so.7.0 00:01:47.149 LIB libspdk_vfio_user.a 00:01:47.149 SYMLINK libspdk_ioat.so 00:01:47.149 SO libspdk_vfio_user.so.5.0 00:01:47.149 LIB libspdk_util.a 00:01:47.412 SYMLINK libspdk_vfio_user.so 00:01:47.412 SO libspdk_util.so.9.0 00:01:47.412 SYMLINK libspdk_util.so 00:01:47.987 CC lib/json/json_parse.o 00:01:47.987 CC lib/json/json_util.o 00:01:47.987 CC lib/json/json_write.o 00:01:47.987 CC lib/rdma/common.o 00:01:47.987 CC lib/idxd/idxd.o 00:01:47.987 CC lib/rdma/rdma_verbs.o 00:01:47.987 CC lib/conf/conf.o 00:01:47.987 CC lib/idxd/idxd_user.o 00:01:47.987 CC lib/vmd/vmd.o 00:01:47.987 CC lib/env_dpdk/env.o 00:01:47.987 CC lib/vmd/led.o 00:01:47.987 CC lib/env_dpdk/memory.o 00:01:47.987 CC lib/env_dpdk/pci.o 00:01:47.987 CC lib/env_dpdk/init.o 00:01:47.987 CC lib/env_dpdk/threads.o 00:01:47.987 CC lib/env_dpdk/pci_ioat.o 00:01:47.987 CC lib/env_dpdk/pci_virtio.o 00:01:47.987 CC lib/env_dpdk/pci_vmd.o 00:01:47.987 CC lib/env_dpdk/pci_idxd.o 00:01:47.987 CC lib/env_dpdk/pci_event.o 00:01:47.987 CC lib/env_dpdk/sigbus_handler.o 00:01:47.987 CC lib/env_dpdk/pci_dpdk.o 00:01:47.987 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:47.987 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:47.987 LIB libspdk_conf.a 00:01:48.250 SO libspdk_conf.so.6.0 00:01:48.250 LIB libspdk_rdma.a 00:01:48.250 LIB libspdk_json.a 00:01:48.250 SYMLINK libspdk_conf.so 00:01:48.250 SO libspdk_rdma.so.6.0 00:01:48.250 SO libspdk_json.so.6.0 00:01:48.250 LIB libspdk_trace_parser.a 00:01:48.250 SO libspdk_trace_parser.so.5.0 00:01:48.250 SYMLINK libspdk_rdma.so 00:01:48.250 SYMLINK libspdk_json.so 00:01:48.250 SYMLINK libspdk_trace_parser.so 00:01:48.512 LIB libspdk_idxd.a 00:01:48.512 SO libspdk_idxd.so.12.0 00:01:48.512 LIB libspdk_vmd.a 00:01:48.512 SO libspdk_vmd.so.6.0 00:01:48.512 SYMLINK libspdk_idxd.so 00:01:48.512 SYMLINK libspdk_vmd.so 00:01:48.512 CC lib/jsonrpc/jsonrpc_server.o 00:01:48.512 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:48.512 CC lib/jsonrpc/jsonrpc_client.o 00:01:48.512 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:48.774 LIB libspdk_jsonrpc.a 00:01:49.037 SO libspdk_jsonrpc.so.6.0 00:01:49.037 SYMLINK libspdk_jsonrpc.so 00:01:49.037 LIB libspdk_env_dpdk.a 00:01:49.037 SO libspdk_env_dpdk.so.14.0 00:01:49.299 SYMLINK libspdk_env_dpdk.so 00:01:49.299 CC lib/rpc/rpc.o 00:01:49.561 LIB libspdk_rpc.a 00:01:49.561 SO libspdk_rpc.so.6.0 00:01:49.561 SYMLINK libspdk_rpc.so 00:01:50.137 CC lib/keyring/keyring.o 00:01:50.137 CC lib/keyring/keyring_rpc.o 00:01:50.137 CC lib/notify/notify.o 00:01:50.137 CC lib/notify/notify_rpc.o 00:01:50.137 CC lib/trace/trace.o 00:01:50.137 CC lib/trace/trace_flags.o 00:01:50.137 CC lib/trace/trace_rpc.o 00:01:50.137 LIB libspdk_notify.a 00:01:50.137 SO libspdk_notify.so.6.0 00:01:50.400 LIB libspdk_keyring.a 00:01:50.400 LIB libspdk_trace.a 00:01:50.400 SO libspdk_keyring.so.1.0 
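The SO/SYMLINK pairs in this output (for example libspdk_log.so.7.0 and libspdk_log.so) appear to be the versioned shared objects built alongside the static archives plus their unversioned development symlinks. The SYMLINK step amounts to no more than an alias next to the versioned file; the library name and version below are taken from the lines above purely for illustration:

  # Sketch only: what a SYMLINK line corresponds to in the library output directory.
  cd build/lib
  ln -sf libspdk_log.so.7.0 libspdk_log.so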
00:01:50.400 SYMLINK libspdk_notify.so 00:01:50.400 SO libspdk_trace.so.10.0 00:01:50.400 SYMLINK libspdk_keyring.so 00:01:50.400 SYMLINK libspdk_trace.so 00:01:50.662 CC lib/thread/thread.o 00:01:50.662 CC lib/thread/iobuf.o 00:01:50.662 CC lib/sock/sock.o 00:01:50.662 CC lib/sock/sock_rpc.o 00:01:51.237 LIB libspdk_sock.a 00:01:51.237 SO libspdk_sock.so.9.0 00:01:51.237 SYMLINK libspdk_sock.so 00:01:51.499 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:51.499 CC lib/nvme/nvme_ctrlr.o 00:01:51.499 CC lib/nvme/nvme_fabric.o 00:01:51.499 CC lib/nvme/nvme_ns_cmd.o 00:01:51.499 CC lib/nvme/nvme_ns.o 00:01:51.499 CC lib/nvme/nvme_pcie_common.o 00:01:51.499 CC lib/nvme/nvme_pcie.o 00:01:51.499 CC lib/nvme/nvme_qpair.o 00:01:51.499 CC lib/nvme/nvme.o 00:01:51.499 CC lib/nvme/nvme_quirks.o 00:01:51.499 CC lib/nvme/nvme_transport.o 00:01:51.499 CC lib/nvme/nvme_discovery.o 00:01:51.499 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:51.499 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:51.499 CC lib/nvme/nvme_tcp.o 00:01:51.499 CC lib/nvme/nvme_opal.o 00:01:51.499 CC lib/nvme/nvme_io_msg.o 00:01:51.499 CC lib/nvme/nvme_poll_group.o 00:01:51.499 CC lib/nvme/nvme_zns.o 00:01:51.499 CC lib/nvme/nvme_stubs.o 00:01:51.761 CC lib/nvme/nvme_auth.o 00:01:51.761 CC lib/nvme/nvme_cuse.o 00:01:51.761 CC lib/nvme/nvme_vfio_user.o 00:01:51.761 CC lib/nvme/nvme_rdma.o 00:01:52.023 LIB libspdk_thread.a 00:01:52.023 SO libspdk_thread.so.10.0 00:01:52.286 SYMLINK libspdk_thread.so 00:01:52.549 CC lib/virtio/virtio.o 00:01:52.549 CC lib/virtio/virtio_vhost_user.o 00:01:52.549 CC lib/virtio/virtio_vfio_user.o 00:01:52.549 CC lib/virtio/virtio_pci.o 00:01:52.549 CC lib/vfu_tgt/tgt_endpoint.o 00:01:52.549 CC lib/vfu_tgt/tgt_rpc.o 00:01:52.549 CC lib/accel/accel.o 00:01:52.549 CC lib/accel/accel_rpc.o 00:01:52.549 CC lib/accel/accel_sw.o 00:01:52.549 CC lib/init/json_config.o 00:01:52.549 CC lib/init/subsystem.o 00:01:52.549 CC lib/init/subsystem_rpc.o 00:01:52.549 CC lib/init/rpc.o 00:01:52.549 CC lib/blob/blobstore.o 00:01:52.549 CC lib/blob/request.o 00:01:52.549 CC lib/blob/zeroes.o 00:01:52.549 CC lib/blob/blob_bs_dev.o 00:01:52.811 LIB libspdk_init.a 00:01:52.811 SO libspdk_init.so.5.0 00:01:52.811 LIB libspdk_vfu_tgt.a 00:01:52.811 LIB libspdk_virtio.a 00:01:52.811 SO libspdk_vfu_tgt.so.3.0 00:01:52.811 SO libspdk_virtio.so.7.0 00:01:53.074 SYMLINK libspdk_init.so 00:01:53.074 SYMLINK libspdk_vfu_tgt.so 00:01:53.074 SYMLINK libspdk_virtio.so 00:01:53.335 CC lib/event/app.o 00:01:53.335 CC lib/event/reactor.o 00:01:53.335 CC lib/event/log_rpc.o 00:01:53.335 CC lib/event/app_rpc.o 00:01:53.335 CC lib/event/scheduler_static.o 00:01:53.335 LIB libspdk_accel.a 00:01:53.597 SO libspdk_accel.so.15.0 00:01:53.597 LIB libspdk_nvme.a 00:01:53.597 SYMLINK libspdk_accel.so 00:01:53.597 LIB libspdk_event.a 00:01:53.597 SO libspdk_nvme.so.13.0 00:01:53.597 SO libspdk_event.so.13.0 00:01:53.860 SYMLINK libspdk_event.so 00:01:53.860 CC lib/bdev/bdev.o 00:01:53.860 CC lib/bdev/bdev_rpc.o 00:01:53.860 CC lib/bdev/bdev_zone.o 00:01:53.860 CC lib/bdev/part.o 00:01:53.860 CC lib/bdev/scsi_nvme.o 00:01:53.860 SYMLINK libspdk_nvme.so 00:01:55.250 LIB libspdk_blob.a 00:01:55.250 SO libspdk_blob.so.11.0 00:01:55.250 SYMLINK libspdk_blob.so 00:01:55.512 CC lib/blobfs/blobfs.o 00:01:55.512 CC lib/lvol/lvol.o 00:01:55.512 CC lib/blobfs/tree.o 00:01:56.086 LIB libspdk_bdev.a 00:01:56.086 SO libspdk_bdev.so.15.0 00:01:56.348 LIB libspdk_blobfs.a 00:01:56.348 LIB libspdk_lvol.a 00:01:56.348 SO libspdk_blobfs.so.10.0 00:01:56.348 SYMLINK libspdk_bdev.so 00:01:56.348 SO 
libspdk_lvol.so.10.0 00:01:56.348 SYMLINK libspdk_blobfs.so 00:01:56.348 SYMLINK libspdk_lvol.so 00:01:56.609 CC lib/ftl/ftl_core.o 00:01:56.609 CC lib/ftl/ftl_init.o 00:01:56.609 CC lib/nvmf/ctrlr.o 00:01:56.609 CC lib/nvmf/ctrlr_discovery.o 00:01:56.609 CC lib/ftl/ftl_layout.o 00:01:56.609 CC lib/ftl/ftl_debug.o 00:01:56.609 CC lib/nvmf/ctrlr_bdev.o 00:01:56.609 CC lib/scsi/dev.o 00:01:56.609 CC lib/ftl/ftl_io.o 00:01:56.609 CC lib/nvmf/subsystem.o 00:01:56.609 CC lib/nvmf/nvmf.o 00:01:56.609 CC lib/ftl/ftl_sb.o 00:01:56.609 CC lib/scsi/lun.o 00:01:56.609 CC lib/nbd/nbd.o 00:01:56.609 CC lib/scsi/port.o 00:01:56.609 CC lib/nvmf/nvmf_rpc.o 00:01:56.609 CC lib/ftl/ftl_l2p.o 00:01:56.609 CC lib/nvmf/transport.o 00:01:56.609 CC lib/scsi/scsi.o 00:01:56.609 CC lib/ftl/ftl_l2p_flat.o 00:01:56.609 CC lib/nbd/nbd_rpc.o 00:01:56.609 CC lib/ublk/ublk.o 00:01:56.609 CC lib/nvmf/tcp.o 00:01:56.609 CC lib/scsi/scsi_bdev.o 00:01:56.609 CC lib/ftl/ftl_nv_cache.o 00:01:56.609 CC lib/ublk/ublk_rpc.o 00:01:56.609 CC lib/nvmf/stubs.o 00:01:56.609 CC lib/scsi/scsi_pr.o 00:01:56.609 CC lib/ftl/ftl_band.o 00:01:56.609 CC lib/scsi/scsi_rpc.o 00:01:56.609 CC lib/nvmf/vfio_user.o 00:01:56.609 CC lib/ftl/ftl_band_ops.o 00:01:56.609 CC lib/scsi/task.o 00:01:56.609 CC lib/nvmf/rdma.o 00:01:56.609 CC lib/ftl/ftl_writer.o 00:01:56.609 CC lib/nvmf/auth.o 00:01:56.609 CC lib/ftl/ftl_rq.o 00:01:56.609 CC lib/ftl/ftl_reloc.o 00:01:56.609 CC lib/ftl/ftl_l2p_cache.o 00:01:56.609 CC lib/ftl/ftl_p2l.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:56.609 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:56.609 CC lib/ftl/utils/ftl_conf.o 00:01:56.609 CC lib/ftl/utils/ftl_md.o 00:01:56.609 CC lib/ftl/utils/ftl_bitmap.o 00:01:56.609 CC lib/ftl/utils/ftl_mempool.o 00:01:56.609 CC lib/ftl/utils/ftl_property.o 00:01:56.609 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:56.609 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:56.609 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:56.609 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:56.609 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:56.609 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:56.609 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:56.609 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:56.609 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:56.609 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:56.609 CC lib/ftl/base/ftl_base_dev.o 00:01:56.610 CC lib/ftl/base/ftl_base_bdev.o 00:01:56.610 CC lib/ftl/ftl_trace.o 00:01:57.180 LIB libspdk_nbd.a 00:01:57.440 SO libspdk_nbd.so.7.0 00:01:57.440 LIB libspdk_scsi.a 00:01:57.440 SYMLINK libspdk_nbd.so 00:01:57.440 SO libspdk_scsi.so.9.0 00:01:57.441 LIB libspdk_ublk.a 00:01:57.441 SYMLINK libspdk_scsi.so 00:01:57.704 SO libspdk_ublk.so.3.0 00:01:57.704 SYMLINK libspdk_ublk.so 00:01:57.704 LIB libspdk_ftl.a 00:01:57.965 CC lib/vhost/vhost.o 00:01:57.966 CC lib/vhost/vhost_rpc.o 00:01:57.966 CC lib/vhost/vhost_scsi.o 00:01:57.966 CC lib/vhost/vhost_blk.o 00:01:57.966 CC lib/vhost/rte_vhost_user.o 00:01:57.966 CC lib/iscsi/conn.o 00:01:57.966 CC lib/iscsi/init_grp.o 00:01:57.966 CC lib/iscsi/iscsi.o 
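Among the libraries compiled in this stretch is lib/nvmf together with its TCP transport, the component this nvmf-tcp job exercises once the build finishes. A minimal sketch, assuming the rpc.py method names shipped with recent SPDK releases, of bringing that code up by hand:

  # Sketch only: start the freshly built NVMe-oF target and add a TCP transport.
  ./build/bin/nvmf_tgt &
  sleep 2                                   # crude wait; the test scripts poll the RPC socket instead
  ./scripts/rpc.py nvmf_create_transport -t tcp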
00:01:57.966 CC lib/iscsi/md5.o 00:01:57.966 CC lib/iscsi/tgt_node.o 00:01:57.966 CC lib/iscsi/param.o 00:01:57.966 CC lib/iscsi/portal_grp.o 00:01:57.966 CC lib/iscsi/iscsi_subsystem.o 00:01:57.966 CC lib/iscsi/iscsi_rpc.o 00:01:57.966 CC lib/iscsi/task.o 00:01:57.966 SO libspdk_ftl.so.9.0 00:01:58.539 SYMLINK libspdk_ftl.so 00:01:58.801 LIB libspdk_nvmf.a 00:01:58.801 SO libspdk_nvmf.so.18.0 00:01:58.801 LIB libspdk_vhost.a 00:01:59.063 SO libspdk_vhost.so.8.0 00:01:59.063 SYMLINK libspdk_nvmf.so 00:01:59.063 SYMLINK libspdk_vhost.so 00:01:59.063 LIB libspdk_iscsi.a 00:01:59.063 SO libspdk_iscsi.so.8.0 00:01:59.332 SYMLINK libspdk_iscsi.so 00:01:59.916 CC module/vfu_device/vfu_virtio.o 00:01:59.916 CC module/vfu_device/vfu_virtio_blk.o 00:01:59.916 CC module/vfu_device/vfu_virtio_scsi.o 00:01:59.916 CC module/vfu_device/vfu_virtio_rpc.o 00:01:59.916 CC module/env_dpdk/env_dpdk_rpc.o 00:02:00.178 LIB libspdk_env_dpdk_rpc.a 00:02:00.178 CC module/accel/ioat/accel_ioat.o 00:02:00.178 CC module/accel/iaa/accel_iaa.o 00:02:00.178 CC module/blob/bdev/blob_bdev.o 00:02:00.178 CC module/accel/ioat/accel_ioat_rpc.o 00:02:00.178 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:00.178 CC module/accel/iaa/accel_iaa_rpc.o 00:02:00.178 CC module/accel/dsa/accel_dsa.o 00:02:00.178 CC module/accel/error/accel_error.o 00:02:00.178 CC module/accel/dsa/accel_dsa_rpc.o 00:02:00.178 CC module/accel/error/accel_error_rpc.o 00:02:00.178 CC module/sock/posix/posix.o 00:02:00.178 CC module/scheduler/gscheduler/gscheduler.o 00:02:00.178 CC module/keyring/file/keyring.o 00:02:00.178 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:00.178 CC module/keyring/file/keyring_rpc.o 00:02:00.178 SO libspdk_env_dpdk_rpc.so.6.0 00:02:00.178 SYMLINK libspdk_env_dpdk_rpc.so 00:02:00.178 LIB libspdk_scheduler_gscheduler.a 00:02:00.178 LIB libspdk_keyring_file.a 00:02:00.178 LIB libspdk_scheduler_dynamic.a 00:02:00.178 LIB libspdk_scheduler_dpdk_governor.a 00:02:00.178 LIB libspdk_accel_ioat.a 00:02:00.178 SO libspdk_scheduler_gscheduler.so.4.0 00:02:00.178 LIB libspdk_accel_error.a 00:02:00.439 LIB libspdk_accel_iaa.a 00:02:00.439 SO libspdk_keyring_file.so.1.0 00:02:00.439 SO libspdk_scheduler_dynamic.so.4.0 00:02:00.439 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:00.439 SO libspdk_accel_ioat.so.6.0 00:02:00.439 LIB libspdk_accel_dsa.a 00:02:00.439 SO libspdk_accel_error.so.2.0 00:02:00.439 SO libspdk_accel_iaa.so.3.0 00:02:00.439 SYMLINK libspdk_scheduler_gscheduler.so 00:02:00.439 LIB libspdk_blob_bdev.a 00:02:00.439 SYMLINK libspdk_keyring_file.so 00:02:00.439 SO libspdk_accel_dsa.so.5.0 00:02:00.439 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:00.439 SYMLINK libspdk_scheduler_dynamic.so 00:02:00.439 SYMLINK libspdk_accel_ioat.so 00:02:00.439 SO libspdk_blob_bdev.so.11.0 00:02:00.439 SYMLINK libspdk_accel_error.so 00:02:00.439 SYMLINK libspdk_accel_iaa.so 00:02:00.439 LIB libspdk_vfu_device.a 00:02:00.439 SYMLINK libspdk_accel_dsa.so 00:02:00.439 SYMLINK libspdk_blob_bdev.so 00:02:00.439 SO libspdk_vfu_device.so.3.0 00:02:00.701 SYMLINK libspdk_vfu_device.so 00:02:00.701 LIB libspdk_sock_posix.a 00:02:00.701 SO libspdk_sock_posix.so.6.0 00:02:00.701 SYMLINK libspdk_sock_posix.so 00:02:00.963 CC module/blobfs/bdev/blobfs_bdev.o 00:02:00.963 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:00.963 CC module/bdev/error/vbdev_error.o 00:02:00.963 CC module/bdev/error/vbdev_error_rpc.o 00:02:00.963 CC module/bdev/null/bdev_null.o 00:02:00.963 CC module/bdev/delay/vbdev_delay.o 00:02:00.963 CC 
module/bdev/null/bdev_null_rpc.o 00:02:00.963 CC module/bdev/malloc/bdev_malloc.o 00:02:00.963 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:00.963 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:00.963 CC module/bdev/raid/bdev_raid.o 00:02:00.963 CC module/bdev/gpt/gpt.o 00:02:00.963 CC module/bdev/raid/bdev_raid_rpc.o 00:02:00.963 CC module/bdev/raid/bdev_raid_sb.o 00:02:00.963 CC module/bdev/nvme/bdev_nvme.o 00:02:00.963 CC module/bdev/gpt/vbdev_gpt.o 00:02:00.963 CC module/bdev/raid/raid0.o 00:02:00.963 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:00.963 CC module/bdev/raid/raid1.o 00:02:00.963 CC module/bdev/nvme/nvme_rpc.o 00:02:00.963 CC module/bdev/lvol/vbdev_lvol.o 00:02:00.963 CC module/bdev/raid/concat.o 00:02:00.963 CC module/bdev/passthru/vbdev_passthru.o 00:02:00.963 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:00.963 CC module/bdev/aio/bdev_aio_rpc.o 00:02:00.963 CC module/bdev/iscsi/bdev_iscsi.o 00:02:00.963 CC module/bdev/split/vbdev_split.o 00:02:00.963 CC module/bdev/ftl/bdev_ftl.o 00:02:00.963 CC module/bdev/nvme/vbdev_opal.o 00:02:00.963 CC module/bdev/aio/bdev_aio.o 00:02:00.963 CC module/bdev/nvme/bdev_mdns_client.o 00:02:00.963 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:00.963 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:00.963 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:00.963 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:00.963 CC module/bdev/split/vbdev_split_rpc.o 00:02:00.963 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:00.963 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:00.963 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:00.963 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:00.963 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:00.963 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:01.223 LIB libspdk_blobfs_bdev.a 00:02:01.485 SO libspdk_blobfs_bdev.so.6.0 00:02:01.485 LIB libspdk_bdev_split.a 00:02:01.485 LIB libspdk_bdev_error.a 00:02:01.485 LIB libspdk_bdev_null.a 00:02:01.485 LIB libspdk_bdev_gpt.a 00:02:01.485 SO libspdk_bdev_split.so.6.0 00:02:01.485 SYMLINK libspdk_blobfs_bdev.so 00:02:01.485 LIB libspdk_bdev_aio.a 00:02:01.485 SO libspdk_bdev_error.so.6.0 00:02:01.485 SO libspdk_bdev_null.so.6.0 00:02:01.485 LIB libspdk_bdev_ftl.a 00:02:01.485 SO libspdk_bdev_gpt.so.6.0 00:02:01.485 SO libspdk_bdev_aio.so.6.0 00:02:01.485 LIB libspdk_bdev_passthru.a 00:02:01.485 LIB libspdk_bdev_delay.a 00:02:01.485 LIB libspdk_bdev_malloc.a 00:02:01.485 SO libspdk_bdev_ftl.so.6.0 00:02:01.485 SYMLINK libspdk_bdev_split.so 00:02:01.485 SYMLINK libspdk_bdev_error.so 00:02:01.485 LIB libspdk_bdev_zone_block.a 00:02:01.485 SO libspdk_bdev_passthru.so.6.0 00:02:01.485 SO libspdk_bdev_delay.so.6.0 00:02:01.485 SYMLINK libspdk_bdev_null.so 00:02:01.485 SO libspdk_bdev_malloc.so.6.0 00:02:01.485 SYMLINK libspdk_bdev_gpt.so 00:02:01.485 SYMLINK libspdk_bdev_aio.so 00:02:01.485 LIB libspdk_bdev_iscsi.a 00:02:01.485 SO libspdk_bdev_zone_block.so.6.0 00:02:01.485 SYMLINK libspdk_bdev_ftl.so 00:02:01.485 SYMLINK libspdk_bdev_passthru.so 00:02:01.485 SO libspdk_bdev_iscsi.so.6.0 00:02:01.485 SYMLINK libspdk_bdev_malloc.so 00:02:01.485 SYMLINK libspdk_bdev_delay.so 00:02:01.748 SYMLINK libspdk_bdev_zone_block.so 00:02:01.748 LIB libspdk_bdev_lvol.a 00:02:01.748 SYMLINK libspdk_bdev_iscsi.so 00:02:01.748 LIB libspdk_bdev_virtio.a 00:02:01.748 SO libspdk_bdev_lvol.so.6.0 00:02:01.748 SO libspdk_bdev_virtio.so.6.0 00:02:01.748 SYMLINK libspdk_bdev_lvol.so 00:02:01.748 SYMLINK libspdk_bdev_virtio.so 00:02:02.011 LIB libspdk_bdev_raid.a 00:02:02.011 SO libspdk_bdev_raid.so.6.0 
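The bdev and vbdev modules linked just above (malloc, null, nvme, raid, gpt, lvol and friends) are the block-device layers the functional tests stack on top of the NVMe driver. A minimal sketch, again assuming the stock rpc.py method names, of creating the two simplest of them at runtime:

  # Sketch only: create a 64 MiB malloc bdev and a same-sized null bdev over RPC.
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py bdev_null_create Null0 64 512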
00:02:02.011 SYMLINK libspdk_bdev_raid.so 00:02:02.957 LIB libspdk_bdev_nvme.a 00:02:03.220 SO libspdk_bdev_nvme.so.7.0 00:02:03.220 SYMLINK libspdk_bdev_nvme.so 00:02:03.794 CC module/event/subsystems/iobuf/iobuf.o 00:02:03.794 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:03.794 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:03.794 CC module/event/subsystems/keyring/keyring.o 00:02:03.794 CC module/event/subsystems/vmd/vmd.o 00:02:03.794 CC module/event/subsystems/sock/sock.o 00:02:03.794 CC module/event/subsystems/scheduler/scheduler.o 00:02:03.794 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:03.794 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:04.056 LIB libspdk_event_vhost_blk.a 00:02:04.056 LIB libspdk_event_keyring.a 00:02:04.056 LIB libspdk_event_sock.a 00:02:04.056 LIB libspdk_event_scheduler.a 00:02:04.056 LIB libspdk_event_vmd.a 00:02:04.056 LIB libspdk_event_iobuf.a 00:02:04.056 LIB libspdk_event_vfu_tgt.a 00:02:04.056 SO libspdk_event_vhost_blk.so.3.0 00:02:04.056 SO libspdk_event_keyring.so.1.0 00:02:04.056 SO libspdk_event_sock.so.5.0 00:02:04.056 SO libspdk_event_scheduler.so.4.0 00:02:04.056 SO libspdk_event_iobuf.so.3.0 00:02:04.056 SO libspdk_event_vmd.so.6.0 00:02:04.056 SO libspdk_event_vfu_tgt.so.3.0 00:02:04.056 SYMLINK libspdk_event_vhost_blk.so 00:02:04.356 SYMLINK libspdk_event_keyring.so 00:02:04.356 SYMLINK libspdk_event_scheduler.so 00:02:04.356 SYMLINK libspdk_event_sock.so 00:02:04.356 SYMLINK libspdk_event_iobuf.so 00:02:04.356 SYMLINK libspdk_event_vmd.so 00:02:04.356 SYMLINK libspdk_event_vfu_tgt.so 00:02:04.617 CC module/event/subsystems/accel/accel.o 00:02:04.617 LIB libspdk_event_accel.a 00:02:04.878 SO libspdk_event_accel.so.6.0 00:02:04.878 SYMLINK libspdk_event_accel.so 00:02:05.139 CC module/event/subsystems/bdev/bdev.o 00:02:05.561 LIB libspdk_event_bdev.a 00:02:05.561 SO libspdk_event_bdev.so.6.0 00:02:05.561 SYMLINK libspdk_event_bdev.so 00:02:05.823 CC module/event/subsystems/scsi/scsi.o 00:02:05.823 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:05.823 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:05.823 CC module/event/subsystems/nbd/nbd.o 00:02:05.823 CC module/event/subsystems/ublk/ublk.o 00:02:06.085 LIB libspdk_event_ublk.a 00:02:06.085 LIB libspdk_event_scsi.a 00:02:06.085 LIB libspdk_event_nbd.a 00:02:06.085 SO libspdk_event_ublk.so.3.0 00:02:06.085 SO libspdk_event_nbd.so.6.0 00:02:06.085 SO libspdk_event_scsi.so.6.0 00:02:06.085 LIB libspdk_event_nvmf.a 00:02:06.085 SYMLINK libspdk_event_ublk.so 00:02:06.085 SYMLINK libspdk_event_nbd.so 00:02:06.085 SYMLINK libspdk_event_scsi.so 00:02:06.085 SO libspdk_event_nvmf.so.6.0 00:02:06.085 SYMLINK libspdk_event_nvmf.so 00:02:06.345 CC module/event/subsystems/iscsi/iscsi.o 00:02:06.345 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:06.607 LIB libspdk_event_vhost_scsi.a 00:02:06.607 LIB libspdk_event_iscsi.a 00:02:06.607 SO libspdk_event_vhost_scsi.so.3.0 00:02:06.607 SO libspdk_event_iscsi.so.6.0 00:02:06.607 SYMLINK libspdk_event_vhost_scsi.so 00:02:06.869 SYMLINK libspdk_event_iscsi.so 00:02:06.869 SO libspdk.so.6.0 00:02:06.869 SYMLINK libspdk.so 00:02:07.442 CC test/rpc_client/rpc_client_test.o 00:02:07.442 CC app/trace_record/trace_record.o 00:02:07.442 CXX app/trace/trace.o 00:02:07.442 CC app/spdk_lspci/spdk_lspci.o 00:02:07.442 CC app/spdk_nvme_discover/discovery_aer.o 00:02:07.442 TEST_HEADER include/spdk/accel.h 00:02:07.442 TEST_HEADER include/spdk/accel_module.h 00:02:07.443 TEST_HEADER include/spdk/assert.h 00:02:07.443 CC 
app/spdk_nvme_perf/perf.o 00:02:07.443 TEST_HEADER include/spdk/base64.h 00:02:07.443 TEST_HEADER include/spdk/barrier.h 00:02:07.443 TEST_HEADER include/spdk/bdev.h 00:02:07.443 TEST_HEADER include/spdk/bdev_module.h 00:02:07.443 TEST_HEADER include/spdk/bit_array.h 00:02:07.443 TEST_HEADER include/spdk/bdev_zone.h 00:02:07.443 TEST_HEADER include/spdk/bit_pool.h 00:02:07.443 TEST_HEADER include/spdk/blob_bdev.h 00:02:07.443 CC app/spdk_top/spdk_top.o 00:02:07.443 CC app/spdk_nvme_identify/identify.o 00:02:07.443 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:07.443 TEST_HEADER include/spdk/blob.h 00:02:07.443 TEST_HEADER include/spdk/blobfs.h 00:02:07.443 TEST_HEADER include/spdk/conf.h 00:02:07.443 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:07.443 TEST_HEADER include/spdk/cpuset.h 00:02:07.443 TEST_HEADER include/spdk/config.h 00:02:07.443 CC app/nvmf_tgt/nvmf_main.o 00:02:07.443 TEST_HEADER include/spdk/crc16.h 00:02:07.443 TEST_HEADER include/spdk/crc32.h 00:02:07.443 TEST_HEADER include/spdk/crc64.h 00:02:07.443 CC app/iscsi_tgt/iscsi_tgt.o 00:02:07.443 TEST_HEADER include/spdk/endian.h 00:02:07.443 TEST_HEADER include/spdk/dif.h 00:02:07.443 TEST_HEADER include/spdk/dma.h 00:02:07.443 TEST_HEADER include/spdk/env_dpdk.h 00:02:07.443 TEST_HEADER include/spdk/env.h 00:02:07.443 TEST_HEADER include/spdk/event.h 00:02:07.443 TEST_HEADER include/spdk/fd_group.h 00:02:07.443 TEST_HEADER include/spdk/fd.h 00:02:07.443 CC app/spdk_dd/spdk_dd.o 00:02:07.443 TEST_HEADER include/spdk/file.h 00:02:07.443 TEST_HEADER include/spdk/ftl.h 00:02:07.443 CC app/vhost/vhost.o 00:02:07.443 TEST_HEADER include/spdk/gpt_spec.h 00:02:07.443 TEST_HEADER include/spdk/histogram_data.h 00:02:07.443 TEST_HEADER include/spdk/idxd_spec.h 00:02:07.443 TEST_HEADER include/spdk/idxd.h 00:02:07.443 TEST_HEADER include/spdk/hexlify.h 00:02:07.443 CC app/spdk_tgt/spdk_tgt.o 00:02:07.443 TEST_HEADER include/spdk/ioat.h 00:02:07.443 TEST_HEADER include/spdk/init.h 00:02:07.443 TEST_HEADER include/spdk/ioat_spec.h 00:02:07.443 TEST_HEADER include/spdk/iscsi_spec.h 00:02:07.443 TEST_HEADER include/spdk/json.h 00:02:07.443 TEST_HEADER include/spdk/jsonrpc.h 00:02:07.443 TEST_HEADER include/spdk/keyring.h 00:02:07.443 TEST_HEADER include/spdk/keyring_module.h 00:02:07.443 TEST_HEADER include/spdk/likely.h 00:02:07.443 TEST_HEADER include/spdk/log.h 00:02:07.443 TEST_HEADER include/spdk/lvol.h 00:02:07.443 TEST_HEADER include/spdk/memory.h 00:02:07.443 TEST_HEADER include/spdk/mmio.h 00:02:07.443 TEST_HEADER include/spdk/notify.h 00:02:07.443 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:07.443 TEST_HEADER include/spdk/nbd.h 00:02:07.443 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.443 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.443 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.443 TEST_HEADER include/spdk/nvme.h 00:02:07.443 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.443 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.443 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.443 TEST_HEADER include/spdk/nvmf.h 00:02:07.443 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.443 TEST_HEADER include/spdk/opal.h 00:02:07.443 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.443 TEST_HEADER include/spdk/opal_spec.h 00:02:07.443 TEST_HEADER include/spdk/pci_ids.h 00:02:07.443 TEST_HEADER include/spdk/pipe.h 00:02:07.443 TEST_HEADER include/spdk/queue.h 00:02:07.443 TEST_HEADER include/spdk/reduce.h 00:02:07.443 TEST_HEADER include/spdk/scheduler.h 00:02:07.443 TEST_HEADER include/spdk/rpc.h 00:02:07.443 TEST_HEADER 
include/spdk/scsi.h 00:02:07.443 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.443 TEST_HEADER include/spdk/stdinc.h 00:02:07.443 TEST_HEADER include/spdk/sock.h 00:02:07.443 TEST_HEADER include/spdk/trace.h 00:02:07.443 TEST_HEADER include/spdk/string.h 00:02:07.443 TEST_HEADER include/spdk/thread.h 00:02:07.443 TEST_HEADER include/spdk/tree.h 00:02:07.443 TEST_HEADER include/spdk/ublk.h 00:02:07.443 TEST_HEADER include/spdk/trace_parser.h 00:02:07.729 TEST_HEADER include/spdk/util.h 00:02:07.729 TEST_HEADER include/spdk/version.h 00:02:07.729 TEST_HEADER include/spdk/uuid.h 00:02:07.729 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.729 TEST_HEADER include/spdk/vhost.h 00:02:07.729 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.729 TEST_HEADER include/spdk/vmd.h 00:02:07.729 TEST_HEADER include/spdk/xor.h 00:02:07.729 TEST_HEADER include/spdk/zipf.h 00:02:07.729 CXX test/cpp_headers/accel.o 00:02:07.729 CXX test/cpp_headers/assert.o 00:02:07.729 CXX test/cpp_headers/barrier.o 00:02:07.729 CXX test/cpp_headers/accel_module.o 00:02:07.729 CXX test/cpp_headers/base64.o 00:02:07.729 CXX test/cpp_headers/bdev.o 00:02:07.729 CXX test/cpp_headers/bdev_zone.o 00:02:07.729 CXX test/cpp_headers/bit_array.o 00:02:07.729 CXX test/cpp_headers/bdev_module.o 00:02:07.729 CXX test/cpp_headers/bit_pool.o 00:02:07.729 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.729 CXX test/cpp_headers/blobfs.o 00:02:07.729 CXX test/cpp_headers/blob_bdev.o 00:02:07.729 CXX test/cpp_headers/blob.o 00:02:07.729 CXX test/cpp_headers/conf.o 00:02:07.729 CXX test/cpp_headers/cpuset.o 00:02:07.729 CXX test/cpp_headers/config.o 00:02:07.729 CXX test/cpp_headers/crc16.o 00:02:07.729 CXX test/cpp_headers/crc32.o 00:02:07.729 CXX test/cpp_headers/crc64.o 00:02:07.729 CXX test/cpp_headers/endian.o 00:02:07.729 CXX test/cpp_headers/dif.o 00:02:07.729 CXX test/cpp_headers/dma.o 00:02:07.729 CXX test/cpp_headers/env_dpdk.o 00:02:07.729 CXX test/cpp_headers/fd.o 00:02:07.729 CXX test/cpp_headers/event.o 00:02:07.729 CXX test/cpp_headers/env.o 00:02:07.729 CXX test/cpp_headers/file.o 00:02:07.729 CXX test/cpp_headers/fd_group.o 00:02:07.729 CXX test/cpp_headers/ftl.o 00:02:07.729 CXX test/cpp_headers/gpt_spec.o 00:02:07.729 CXX test/cpp_headers/hexlify.o 00:02:07.729 CXX test/cpp_headers/idxd_spec.o 00:02:07.729 CXX test/cpp_headers/idxd.o 00:02:07.729 CXX test/cpp_headers/histogram_data.o 00:02:07.729 CXX test/cpp_headers/ioat_spec.o 00:02:07.729 CXX test/cpp_headers/iscsi_spec.o 00:02:07.729 CXX test/cpp_headers/init.o 00:02:07.729 CXX test/cpp_headers/ioat.o 00:02:07.729 CXX test/cpp_headers/keyring.o 00:02:07.729 CXX test/cpp_headers/keyring_module.o 00:02:07.729 CXX test/cpp_headers/jsonrpc.o 00:02:07.729 CXX test/cpp_headers/json.o 00:02:07.730 CXX test/cpp_headers/likely.o 00:02:07.730 CXX test/cpp_headers/lvol.o 00:02:07.730 CXX test/cpp_headers/log.o 00:02:07.730 CXX test/cpp_headers/memory.o 00:02:07.730 CXX test/cpp_headers/mmio.o 00:02:07.730 CXX test/cpp_headers/notify.o 00:02:07.730 CXX test/cpp_headers/nbd.o 00:02:07.730 CXX test/cpp_headers/nvme.o 00:02:07.730 CXX test/cpp_headers/nvme_intel.o 00:02:07.730 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.730 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.730 CXX test/cpp_headers/nvme_spec.o 00:02:07.730 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.730 CXX test/cpp_headers/nvme_zns.o 00:02:07.730 CXX test/cpp_headers/nvmf_spec.o 00:02:07.730 CC examples/sock/hello_world/hello_sock.o 00:02:07.730 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.730 CXX test/cpp_headers/nvmf.o 
00:02:07.730 CXX test/cpp_headers/nvmf_transport.o 00:02:07.730 CXX test/cpp_headers/opal.o 00:02:07.730 CXX test/cpp_headers/opal_spec.o 00:02:07.730 CXX test/cpp_headers/pipe.o 00:02:07.730 CXX test/cpp_headers/pci_ids.o 00:02:07.730 CXX test/cpp_headers/queue.o 00:02:07.730 CXX test/cpp_headers/reduce.o 00:02:07.730 CXX test/cpp_headers/rpc.o 00:02:07.730 CXX test/cpp_headers/scheduler.o 00:02:07.730 CXX test/cpp_headers/scsi.o 00:02:07.730 CC examples/util/zipf/zipf.o 00:02:07.730 CC test/nvme/reset/reset.o 00:02:07.730 CC test/nvme/e2edp/nvme_dp.o 00:02:07.730 CC examples/vmd/lsvmd/lsvmd.o 00:02:08.019 CC test/nvme/connect_stress/connect_stress.o 00:02:08.019 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:08.019 CC test/thread/poller_perf/poller_perf.o 00:02:08.019 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:08.019 CC test/nvme/overhead/overhead.o 00:02:08.019 CC test/nvme/fdp/fdp.o 00:02:08.019 CC test/nvme/err_injection/err_injection.o 00:02:08.019 CC examples/vmd/led/led.o 00:02:08.019 CC test/event/event_perf/event_perf.o 00:02:08.019 CC test/nvme/sgl/sgl.o 00:02:08.019 CC test/event/reactor_perf/reactor_perf.o 00:02:08.019 CC test/nvme/aer/aer.o 00:02:08.019 CC test/nvme/simple_copy/simple_copy.o 00:02:08.019 CC test/nvme/compliance/nvme_compliance.o 00:02:08.019 CC app/fio/nvme/fio_plugin.o 00:02:08.019 CC test/nvme/boot_partition/boot_partition.o 00:02:08.019 CC examples/nvme/hello_world/hello_world.o 00:02:08.019 CXX test/cpp_headers/scsi_spec.o 00:02:08.019 CC test/env/vtophys/vtophys.o 00:02:08.019 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:08.019 CC examples/nvme/abort/abort.o 00:02:08.019 CC examples/nvme/reconnect/reconnect.o 00:02:08.019 CC examples/nvme/arbitration/arbitration.o 00:02:08.019 CC examples/accel/perf/accel_perf.o 00:02:08.019 CC test/nvme/fused_ordering/fused_ordering.o 00:02:08.019 CC test/event/reactor/reactor.o 00:02:08.019 CC test/nvme/cuse/cuse.o 00:02:08.019 CC test/app/histogram_perf/histogram_perf.o 00:02:08.019 CC examples/bdev/hello_world/hello_bdev.o 00:02:08.019 CC test/env/pci/pci_ut.o 00:02:08.019 CC test/event/scheduler/scheduler.o 00:02:08.019 CC test/nvme/startup/startup.o 00:02:08.019 CC examples/ioat/verify/verify.o 00:02:08.019 CC test/blobfs/mkfs/mkfs.o 00:02:08.019 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:08.019 CC test/app/stub/stub.o 00:02:08.019 CC test/accel/dif/dif.o 00:02:08.019 CC examples/ioat/perf/perf.o 00:02:08.019 CC test/app/jsoncat/jsoncat.o 00:02:08.019 CC test/env/memory/memory_ut.o 00:02:08.019 CC examples/nvme/hotplug/hotplug.o 00:02:08.019 CC test/bdev/bdevio/bdevio.o 00:02:08.019 CC examples/nvmf/nvmf/nvmf.o 00:02:08.019 CC test/nvme/reserve/reserve.o 00:02:08.019 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:08.019 CC test/event/app_repeat/app_repeat.o 00:02:08.019 CC examples/thread/thread/thread_ex.o 00:02:08.019 CC examples/blob/hello_world/hello_blob.o 00:02:08.019 CC examples/idxd/perf/perf.o 00:02:08.019 CC test/dma/test_dma/test_dma.o 00:02:08.019 CC examples/bdev/bdevperf/bdevperf.o 00:02:08.019 CC app/fio/bdev/fio_plugin.o 00:02:08.019 CC examples/blob/cli/blobcli.o 00:02:08.019 LINK spdk_lspci 00:02:08.313 CC test/app/bdev_svc/bdev_svc.o 00:02:08.313 LINK spdk_nvme_discover 00:02:08.313 LINK rpc_client_test 00:02:08.616 LINK interrupt_tgt 00:02:08.616 LINK nvmf_tgt 00:02:08.616 LINK spdk_tgt 00:02:08.616 LINK vhost 00:02:08.898 LINK iscsi_tgt 00:02:08.898 LINK spdk_trace_record 00:02:08.898 CC test/lvol/esnap/esnap.o 00:02:09.166 LINK lsvmd 00:02:09.166 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:09.166 LINK reactor_perf 00:02:09.166 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:09.166 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:09.166 LINK zipf 00:02:09.166 LINK event_perf 00:02:09.166 LINK vtophys 00:02:09.166 LINK pmr_persistence 00:02:09.166 LINK histogram_perf 00:02:09.166 LINK connect_stress 00:02:09.166 LINK reactor 00:02:09.166 LINK jsoncat 00:02:09.166 LINK poller_perf 00:02:09.166 LINK stub 00:02:09.166 LINK boot_partition 00:02:09.166 LINK led 00:02:09.166 CXX test/cpp_headers/sock.o 00:02:09.166 LINK startup 00:02:09.166 LINK err_injection 00:02:09.166 CXX test/cpp_headers/stdinc.o 00:02:09.166 CXX test/cpp_headers/string.o 00:02:09.166 CXX test/cpp_headers/thread.o 00:02:09.166 LINK env_dpdk_post_init 00:02:09.166 LINK mkfs 00:02:09.166 CXX test/cpp_headers/trace.o 00:02:09.166 LINK app_repeat 00:02:09.166 CXX test/cpp_headers/trace_parser.o 00:02:09.166 CXX test/cpp_headers/tree.o 00:02:09.166 CXX test/cpp_headers/ublk.o 00:02:09.166 CXX test/cpp_headers/util.o 00:02:09.166 CXX test/cpp_headers/uuid.o 00:02:09.166 CXX test/cpp_headers/version.o 00:02:09.166 CXX test/cpp_headers/vfio_user_pci.o 00:02:09.166 CXX test/cpp_headers/vfio_user_spec.o 00:02:09.166 CXX test/cpp_headers/vhost.o 00:02:09.166 LINK cmb_copy 00:02:09.166 CXX test/cpp_headers/vmd.o 00:02:09.166 CXX test/cpp_headers/xor.o 00:02:09.166 LINK hello_sock 00:02:09.166 LINK scheduler 00:02:09.166 CXX test/cpp_headers/zipf.o 00:02:09.166 LINK verify 00:02:09.166 LINK spdk_dd 00:02:09.166 LINK hello_world 00:02:09.166 LINK doorbell_aers 00:02:09.426 LINK ioat_perf 00:02:09.426 LINK simple_copy 00:02:09.426 LINK nvme_dp 00:02:09.426 LINK sgl 00:02:09.426 LINK reserve 00:02:09.426 LINK reset 00:02:09.426 LINK fused_ordering 00:02:09.426 LINK thread 00:02:09.426 LINK hello_blob 00:02:09.426 LINK hotplug 00:02:09.426 LINK overhead 00:02:09.426 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:09.426 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:09.426 LINK bdev_svc 00:02:09.426 LINK hello_bdev 00:02:09.426 LINK aer 00:02:09.426 LINK fdp 00:02:09.426 LINK spdk_trace 00:02:09.426 LINK nvmf 00:02:09.426 LINK nvme_compliance 00:02:09.426 LINK abort 00:02:09.426 LINK arbitration 00:02:09.426 LINK reconnect 00:02:09.426 LINK bdevio 00:02:09.426 LINK pci_ut 00:02:09.426 LINK test_dma 00:02:09.426 LINK idxd_perf 00:02:09.426 LINK dif 00:02:09.688 LINK accel_perf 00:02:09.688 LINK nvme_manage 00:02:09.688 LINK blobcli 00:02:09.688 LINK nvme_fuzz 00:02:09.688 LINK spdk_nvme_perf 00:02:09.688 LINK spdk_nvme 00:02:09.688 LINK spdk_nvme_identify 00:02:09.950 LINK spdk_bdev 00:02:09.950 LINK bdevperf 00:02:09.950 LINK vhost_fuzz 00:02:09.950 LINK mem_callbacks 00:02:09.950 LINK spdk_top 00:02:09.950 LINK memory_ut 00:02:09.950 LINK cuse 00:02:10.523 LINK iscsi_fuzz 00:02:13.075 LINK esnap 00:02:13.338 00:02:13.338 real 0m51.777s 00:02:13.338 user 6m54.096s 00:02:13.338 sys 6m50.903s 00:02:13.338 10:48:09 -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:13.338 10:48:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.338 ************************************ 00:02:13.338 END TEST make 00:02:13.338 ************************************ 00:02:13.600 10:48:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:13.600 10:48:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:13.600 10:48:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:13.600 10:48:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.600 10:48:10 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:13.600 10:48:10 -- pm/common@44 -- $ pid=7332 00:02:13.600 10:48:10 -- pm/common@50 -- $ kill -TERM 7332 00:02:13.600 10:48:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.600 10:48:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:13.600 10:48:10 -- pm/common@44 -- $ pid=7333 00:02:13.600 10:48:10 -- pm/common@50 -- $ kill -TERM 7333 00:02:13.600 10:48:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.600 10:48:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:13.600 10:48:10 -- pm/common@44 -- $ pid=7335 00:02:13.600 10:48:10 -- pm/common@50 -- $ kill -TERM 7335 00:02:13.600 10:48:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.600 10:48:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:13.600 10:48:10 -- pm/common@44 -- $ pid=7362 00:02:13.600 10:48:10 -- pm/common@50 -- $ sudo -E kill -TERM 7362 00:02:13.600 10:48:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:13.600 10:48:10 -- nvmf/common.sh@7 -- # uname -s 00:02:13.600 10:48:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:13.600 10:48:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:13.600 10:48:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:13.600 10:48:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:13.600 10:48:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:13.600 10:48:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:13.600 10:48:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:13.600 10:48:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:13.600 10:48:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:13.600 10:48:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:13.600 10:48:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:13.600 10:48:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:13.600 10:48:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:13.600 10:48:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:13.600 10:48:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:13.600 10:48:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:13.600 10:48:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:13.600 10:48:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:13.600 10:48:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:13.600 10:48:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:13.600 10:48:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.600 10:48:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.601 10:48:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.601 10:48:10 -- paths/export.sh@5 -- # export PATH 00:02:13.601 10:48:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.601 10:48:10 -- nvmf/common.sh@47 -- # : 0 00:02:13.601 10:48:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:13.601 10:48:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:13.601 10:48:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:13.601 10:48:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:13.601 10:48:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:13.601 10:48:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:13.601 10:48:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:13.601 10:48:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:13.601 10:48:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:13.601 10:48:10 -- spdk/autotest.sh@32 -- # uname -s 00:02:13.601 10:48:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:13.601 10:48:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:13.601 10:48:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:13.601 10:48:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:13.601 10:48:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:13.601 10:48:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:13.864 10:48:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:13.864 10:48:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:13.864 10:48:10 -- spdk/autotest.sh@48 -- # udevadm_pid=71166 00:02:13.864 10:48:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:13.864 10:48:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:13.864 10:48:10 -- pm/common@17 -- # local monitor 00:02:13.864 10:48:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.864 10:48:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.864 10:48:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.864 10:48:10 -- pm/common@21 -- # date +%s 00:02:13.864 10:48:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.864 10:48:10 -- pm/common@25 -- # sleep 1 00:02:13.864 10:48:10 -- pm/common@21 -- # date +%s 00:02:13.864 10:48:10 -- pm/common@21 -- # date +%s 00:02:13.864 10:48:10 -- pm/common@21 -- # date +%s 00:02:13.864 10:48:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715762890 00:02:13.864 10:48:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715762890 00:02:13.864 10:48:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715762890 00:02:13.864 10:48:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715762890 00:02:13.864 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715762890_collect-vmstat.pm.log 00:02:13.864 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715762890_collect-cpu-load.pm.log 00:02:13.864 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715762890_collect-cpu-temp.pm.log 00:02:13.864 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715762890_collect-bmc-pm.bmc.pm.log 00:02:14.812 10:48:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:14.812 10:48:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:14.812 10:48:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:14.812 10:48:11 -- common/autotest_common.sh@10 -- # set +x 00:02:14.812 10:48:11 -- spdk/autotest.sh@59 -- # create_test_list 00:02:14.812 10:48:11 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:14.812 10:48:11 -- common/autotest_common.sh@10 -- # set +x 00:02:14.812 10:48:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:14.812 10:48:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.812 10:48:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.812 10:48:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:14.812 10:48:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.812 10:48:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:14.812 10:48:11 -- common/autotest_common.sh@1451 -- # uname 00:02:14.812 10:48:11 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:14.812 10:48:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:14.812 10:48:11 -- common/autotest_common.sh@1471 -- # uname 00:02:14.812 10:48:11 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:14.812 10:48:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:14.812 10:48:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:14.812 10:48:11 -- spdk/autotest.sh@72 -- # hash lcov 00:02:14.812 10:48:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:14.812 10:48:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:14.812 --rc lcov_branch_coverage=1 00:02:14.812 --rc lcov_function_coverage=1 00:02:14.812 --rc genhtml_branch_coverage=1 00:02:14.812 --rc genhtml_function_coverage=1 00:02:14.812 --rc genhtml_legend=1 00:02:14.812 --rc geninfo_all_blocks=1 00:02:14.812 ' 
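The baseline coverage capture recorded in the next entries uses the LCOV options assembled above. As a rough standalone sketch (directory paths and the test label are placeholders, not the exact autotest values), the equivalent invocation for lcov 1.14 looks like this:

#!/usr/bin/env bash
# Sketch: capture an initial ("baseline") lcov report before any test code runs,
# so later captures can be compared against it. Paths below are placeholders.
set -euo pipefail

SRC_DIR=/path/to/spdk       # build tree containing the .gcno files (assumed layout)
OUT_DIR=/path/to/output     # where the coverage info should be written (assumed)

lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     -q -c -i -t Baseline \
     -d "$SRC_DIR" \
     -o "$OUT_DIR/cov_base.info"

# Objects that were compiled but never executed are reported as
# "geninfo: WARNING: GCOV did not produce any data for <file>.gcno",
# which is what the long run of warnings below is.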
00:02:14.812 10:48:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:14.812 --rc lcov_branch_coverage=1 00:02:14.812 --rc lcov_function_coverage=1 00:02:14.812 --rc genhtml_branch_coverage=1 00:02:14.812 --rc genhtml_function_coverage=1 00:02:14.812 --rc genhtml_legend=1 00:02:14.812 --rc geninfo_all_blocks=1 00:02:14.812 ' 00:02:14.812 10:48:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:14.812 --rc lcov_branch_coverage=1 00:02:14.812 --rc lcov_function_coverage=1 00:02:14.812 --rc genhtml_branch_coverage=1 00:02:14.812 --rc genhtml_function_coverage=1 00:02:14.812 --rc genhtml_legend=1 00:02:14.812 --rc geninfo_all_blocks=1 00:02:14.812 --no-external' 00:02:14.812 10:48:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:14.812 --rc lcov_branch_coverage=1 00:02:14.812 --rc lcov_function_coverage=1 00:02:14.812 --rc genhtml_branch_coverage=1 00:02:14.812 --rc genhtml_function_coverage=1 00:02:14.812 --rc genhtml_legend=1 00:02:14.812 --rc geninfo_all_blocks=1 00:02:14.812 --no-external' 00:02:14.812 10:48:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:15.073 lcov: LCOV version 1.14 00:02:15.073 10:48:11 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:27.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:27.315 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:27.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:27.315 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:27.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:27.315 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:27.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:27.315 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:42.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:42.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:42.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:42.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:42.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:42.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:42.233 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:42.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:42.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:42.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:42.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:42.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:42.235 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:42.235 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:42.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:42.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:44.152 10:48:40 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:44.152 10:48:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:44.152 10:48:40 -- common/autotest_common.sh@10 -- # set +x 00:02:44.152 10:48:40 -- spdk/autotest.sh@91 -- # rm -f 00:02:44.152 10:48:40 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.457 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:47.457 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:47.457 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:47.718 10:48:44 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:47.718 10:48:44 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:47.718 10:48:44 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:47.718 10:48:44 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:47.718 10:48:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:47.718 10:48:44 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:47.718 10:48:44 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:47.718 10:48:44 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:47.718 10:48:44 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:47.718 10:48:44 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:47.718 10:48:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:47.718 10:48:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:47.718 10:48:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:47.718 10:48:44 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:47.718 10:48:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:47.718 No valid GPT data, bailing 00:02:47.718 10:48:44 -- scripts/common.sh@391 -- # 
blkid -s PTTYPE -o value /dev/nvme0n1 00:02:47.718 10:48:44 -- scripts/common.sh@391 -- # pt= 00:02:47.718 10:48:44 -- scripts/common.sh@392 -- # return 1 00:02:47.719 10:48:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:47.980 1+0 records in 00:02:47.980 1+0 records out 00:02:47.980 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435272 s, 241 MB/s 00:02:47.980 10:48:44 -- spdk/autotest.sh@118 -- # sync 00:02:47.980 10:48:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:47.980 10:48:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:47.980 10:48:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:56.128 10:48:52 -- spdk/autotest.sh@124 -- # uname -s 00:02:56.128 10:48:52 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:56.128 10:48:52 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:56.128 10:48:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.128 10:48:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.128 10:48:52 -- common/autotest_common.sh@10 -- # set +x 00:02:56.128 ************************************ 00:02:56.128 START TEST setup.sh 00:02:56.128 ************************************ 00:02:56.128 10:48:52 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:56.128 * Looking for test storage... 00:02:56.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.128 10:48:52 -- setup/test-setup.sh@10 -- # uname -s 00:02:56.128 10:48:52 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:56.128 10:48:52 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:56.128 10:48:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.128 10:48:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.128 10:48:52 -- common/autotest_common.sh@10 -- # set +x 00:02:56.128 ************************************ 00:02:56.128 START TEST acl 00:02:56.128 ************************************ 00:02:56.128 10:48:52 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:56.128 * Looking for test storage... 
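The pre-cleanup a few entries above checked the namespace for a partition table with blkid before zeroing its first megabyte with dd. A minimal sketch of that pattern follows; the device name is an example and the check is a simplified stand-in for the block_in_use helper:

#!/usr/bin/env bash
# Sketch: only wipe a namespace when no partition table is detected on it.
# /dev/nvme0n1 is an example device, not a recommendation.
set -euo pipefail

dev=/dev/nvme0n1

# blkid prints the partition-table type (gpt, dos, ...) when one exists;
# an empty result means nothing was detected.
pt=$(blkid -s PTTYPE -o value "$dev" || true)

if [[ -z "$pt" ]]; then
    # Matches the "dd if=/dev/zero of=... bs=1M count=1" step in the log.
    dd if=/dev/zero of="$dev" bs=1M count=1
else
    echo "$dev reports a $pt partition table; leaving it untouched"
fi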
00:02:56.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.128 10:48:52 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:56.128 10:48:52 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:56.128 10:48:52 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:56.128 10:48:52 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:56.129 10:48:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:56.129 10:48:52 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:56.129 10:48:52 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:56.129 10:48:52 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.129 10:48:52 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:56.129 10:48:52 -- setup/acl.sh@12 -- # devs=() 00:02:56.129 10:48:52 -- setup/acl.sh@12 -- # declare -a devs 00:02:56.129 10:48:52 -- setup/acl.sh@13 -- # drivers=() 00:02:56.129 10:48:52 -- setup/acl.sh@13 -- # declare -A drivers 00:02:56.129 10:48:52 -- setup/acl.sh@51 -- # setup reset 00:02:56.129 10:48:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.129 10:48:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.339 10:48:56 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:00.339 10:48:56 -- setup/acl.sh@16 -- # local dev driver 00:03:00.339 10:48:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.339 10:48:56 -- setup/acl.sh@15 -- # setup output status 00:03:00.339 10:48:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.339 10:48:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:03.644 Hugepages 00:03:03.644 node hugesize free / total 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 00:03:03.644 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:03.644 10:49:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.644 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:03.644 10:49:00 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.644 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.644 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.645 10:49:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:03.645 10:49:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:03.645 10:49:00 -- setup/acl.sh@20 -- # continue 00:03:03.645 10:49:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.645 10:49:00 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:03.645 10:49:00 -- setup/acl.sh@54 -- # run_test denied denied 00:03:03.645 10:49:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:03.645 10:49:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:03.645 10:49:00 -- common/autotest_common.sh@10 -- # set +x 00:03:03.645 ************************************ 00:03:03.645 START TEST denied 00:03:03.645 ************************************ 00:03:03.645 10:49:00 -- common/autotest_common.sh@1121 -- # denied 00:03:03.645 10:49:00 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:03.906 10:49:00 -- setup/acl.sh@38 -- # setup output config 00:03:03.906 10:49:00 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:03.906 10:49:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.906 10:49:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.117 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:08.117 10:49:04 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:08.117 10:49:04 -- setup/acl.sh@28 -- # local dev driver 00:03:08.117 10:49:04 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:08.117 10:49:04 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:08.117 10:49:04 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:08.117 10:49:04 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:08.117 10:49:04 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:08.117 10:49:04 -- setup/acl.sh@41 -- # setup reset 00:03:08.117 10:49:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.117 10:49:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.416 00:03:13.416 real 0m8.939s 00:03:13.416 user 0m3.079s 00:03:13.416 sys 0m5.039s 00:03:13.416 10:49:09 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:13.416 10:49:09 -- common/autotest_common.sh@10 -- # set +x 00:03:13.416 ************************************ 00:03:13.416 END TEST denied 00:03:13.416 ************************************ 00:03:13.416 10:49:09 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:13.416 10:49:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:13.416 10:49:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:13.416 10:49:09 -- common/autotest_common.sh@10 -- # set +x 00:03:13.416 ************************************ 00:03:13.416 START TEST allowed 00:03:13.416 ************************************ 00:03:13.416 10:49:09 -- common/autotest_common.sh@1121 -- # allowed 00:03:13.416 10:49:09 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:13.416 10:49:09 -- setup/acl.sh@45 -- # setup output config 00:03:13.416 10:49:09 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:13.416 10:49:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.416 10:49:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
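The denied and allowed tests steer setup.sh purely through the PCI_BLOCKED and PCI_ALLOWED environment variables; the config run issued here reports the controller being rebound in the next entry. A hedged sketch of how such an allow/block decision can be expressed is shown below; the function name and logic are illustrative only, not the internals of SPDK's setup.sh:

#!/usr/bin/env bash
# Sketch: decide whether a PCI address may be touched, given optional
# space-separated allow and block lists. Names here are illustrative.

pci_can_use() {
    local bdf=$1 entry

    # A match in the block list always denies the device.
    for entry in ${PCI_BLOCKED:-}; do
        [[ $bdf == "$entry" ]] && return 1
    done

    # An empty allow list means every remaining device is allowed.
    [[ -z ${PCI_ALLOWED:-} ]] && return 0

    for entry in ${PCI_ALLOWED:-}; do
        [[ $bdf == "$entry" ]] && return 0
    done
    return 1
}

# Mirrors the two tests in this log:
PCI_BLOCKED=' 0000:65:00.0' pci_can_use 0000:65:00.0 || echo denied
PCI_ALLOWED='0000:65:00.0'  pci_can_use 0000:65:00.0 && echo allowed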
00:03:18.711 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:18.711 10:49:14 -- setup/acl.sh@47 -- # verify 00:03:18.711 10:49:14 -- setup/acl.sh@28 -- # local dev driver 00:03:18.711 10:49:14 -- setup/acl.sh@48 -- # setup reset 00:03:18.711 10:49:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.711 10:49:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.923 00:03:22.923 real 0m9.676s 00:03:22.923 user 0m2.879s 00:03:22.923 sys 0m5.048s 00:03:22.923 10:49:18 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:22.923 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:03:22.923 ************************************ 00:03:22.923 END TEST allowed 00:03:22.923 ************************************ 00:03:22.923 00:03:22.923 real 0m26.409s 00:03:22.923 user 0m8.903s 00:03:22.923 sys 0m15.141s 00:03:22.923 10:49:19 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:22.923 10:49:19 -- common/autotest_common.sh@10 -- # set +x 00:03:22.923 ************************************ 00:03:22.923 END TEST acl 00:03:22.923 ************************************ 00:03:22.923 10:49:19 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:22.923 10:49:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:22.923 10:49:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.923 10:49:19 -- common/autotest_common.sh@10 -- # set +x 00:03:22.923 ************************************ 00:03:22.923 START TEST hugepages 00:03:22.923 ************************************ 00:03:22.923 10:49:19 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:22.923 * Looking for test storage... 
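The hugepages test that starts here first looks up Hugepagesize in /proc/meminfo; the long run of [[ field == Hugepagesize ]] / continue entries that follows is the xtrace of that field-by-field scan. A compact sketch of the same lookup, using awk for brevity where the autotest helper walks the file with read, is:

#!/usr/bin/env bash
# Sketch: pull a single value out of /proc/meminfo, e.g. the default huge page size.
set -euo pipefail

get_meminfo_field() {
    local field=$1
    # Lines look like "Hugepagesize:       2048 kB"; print the numeric column.
    awk -v f="$field" '$1 == f":" {print $2}' /proc/meminfo
}

hugepagesize_kb=$(get_meminfo_field Hugepagesize)
echo "default huge page size: ${hugepagesize_kb} kB"    # typically 2048 on x86_64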
00:03:22.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:22.923 10:49:19 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:22.923 10:49:19 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:22.923 10:49:19 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:22.923 10:49:19 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:22.923 10:49:19 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:22.923 10:49:19 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:22.923 10:49:19 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:22.923 10:49:19 -- setup/common.sh@18 -- # local node= 00:03:22.923 10:49:19 -- setup/common.sh@19 -- # local var val 00:03:22.923 10:49:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.923 10:49:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.923 10:49:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.923 10:49:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.923 10:49:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.923 10:49:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109055212 kB' 'MemAvailable: 112295988 kB' 'Buffers: 12152 kB' 'Cached: 8828420 kB' 'SwapCached: 0 kB' 'Active: 6139180 kB' 'Inactive: 3404296 kB' 'Active(anon): 5595340 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706284 kB' 'Mapped: 144644 kB' 'Shmem: 4892436 kB' 'KReclaimable: 233944 kB' 'Slab: 771244 kB' 'SReclaimable: 233944 kB' 'SUnreclaim: 537300 kB' 'KernelStack: 26832 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 8451260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230980 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.923 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.923 10:49:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # continue 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 10:49:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 10:49:19 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.924 10:49:19 -- setup/common.sh@33 -- # echo 2048 00:03:22.924 10:49:19 -- setup/common.sh@33 -- # return 0 00:03:22.924 10:49:19 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:22.924 10:49:19 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:22.924 10:49:19 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:22.924 10:49:19 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:22.924 10:49:19 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:22.924 10:49:19 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:22.924 10:49:19 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:22.924 10:49:19 -- setup/hugepages.sh@207 -- # get_nodes 00:03:22.924 10:49:19 -- setup/hugepages.sh@27 -- # local node 00:03:22.924 10:49:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.924 10:49:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:22.924 10:49:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.924 10:49:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.924 10:49:19 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.924 10:49:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.924 10:49:19 -- setup/hugepages.sh@208 -- # clear_hp 00:03:22.925 10:49:19 -- setup/hugepages.sh@37 -- # local node hp 00:03:22.925 10:49:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.925 10:49:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.925 10:49:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:22.925 10:49:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.925 10:49:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:22.925 10:49:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.925 10:49:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.925 10:49:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:22.925 10:49:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.925 10:49:19 -- setup/hugepages.sh@41 -- # echo 0 00:03:22.925 10:49:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:22.925 10:49:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:22.925 10:49:19 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:22.925 10:49:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:22.925 10:49:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.925 10:49:19 -- common/autotest_common.sh@10 -- # set +x 00:03:22.925 ************************************ 00:03:22.925 START TEST default_setup 00:03:22.925 ************************************ 00:03:22.925 10:49:19 -- common/autotest_common.sh@1121 -- # default_setup 00:03:22.925 10:49:19 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:22.925 10:49:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.925 10:49:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:22.925 10:49:19 -- setup/hugepages.sh@51 -- # shift 00:03:22.925 10:49:19 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:22.925 10:49:19 -- setup/hugepages.sh@52 -- # local node_ids 00:03:22.925 10:49:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.925 10:49:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.925 10:49:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:22.925 10:49:19 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:22.925 10:49:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.925 10:49:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.925 10:49:19 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.925 10:49:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.925 10:49:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.925 10:49:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
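The trace above is setup/common.sh's get_meminfo helper stepping field by field through /proc/meminfo until it reaches the Hugepagesize line and echoing 2048 (kB) back to setup/hugepages.sh, which then records the 2048 kB default, enumerates the runner's two NUMA nodes, zeroes any pre-existing per-node huge page allocations (CLEAR_HUGE=yes) and launches the default_setup test. As a reading aid only, here is a minimal sketch of that parsing pattern; it is simplified (it reads the file directly instead of reproducing the script's mapfile/printf pipeline and the per-node sysfs branch), so treat it as illustrative rather than the exact SPDK implementation:

    # Hedged sketch of the /proc/meminfo lookup traced above (simplified).
    get_meminfo() {
        local get=$1 mem_f=/proc/meminfo var val _
        # IFS=': ' splits "Hugepagesize:    2048 kB" into var=Hugepagesize, val=2048
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # value in kB (or a bare count for HugePages_* fields)
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo Hugepagesize       # prints 2048 on this runner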
00:03:22.925 10:49:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:22.925 10:49:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:22.925 10:49:19 -- setup/hugepages.sh@73 -- # return 0 00:03:22.925 10:49:19 -- setup/hugepages.sh@137 -- # setup output 00:03:22.925 10:49:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.925 10:49:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.227 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:26.227 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:26.488 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:26.488 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:26.488 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:26.488 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:26.753 10:49:23 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:26.753 10:49:23 -- setup/hugepages.sh@89 -- # local node 00:03:26.753 10:49:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.753 10:49:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.753 10:49:23 -- setup/hugepages.sh@92 -- # local surp 00:03:26.753 10:49:23 -- setup/hugepages.sh@93 -- # local resv 00:03:26.753 10:49:23 -- setup/hugepages.sh@94 -- # local anon 00:03:26.753 10:49:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.753 10:49:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.753 10:49:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.753 10:49:23 -- setup/common.sh@18 -- # local node= 00:03:26.753 10:49:23 -- setup/common.sh@19 -- # local var val 00:03:26.753 10:49:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.753 10:49:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.753 10:49:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.753 10:49:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.753 10:49:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.753 10:49:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.753 10:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111240352 kB' 'MemAvailable: 114480536 kB' 'Buffers: 12152 kB' 'Cached: 8828536 kB' 'SwapCached: 0 kB' 'Active: 6159884 kB' 'Inactive: 3404296 kB' 'Active(anon): 5616044 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 726592 kB' 'Mapped: 144948 kB' 'Shmem: 4892552 kB' 'KReclaimable: 232760 kB' 'Slab: 767632 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 534872 kB' 'KernelStack: 26928 
kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8468928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231076 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.753 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.753 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 
10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.754 10:49:23 -- setup/common.sh@33 -- # echo 0 00:03:26.754 10:49:23 -- setup/common.sh@33 -- # return 0 00:03:26.754 10:49:23 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.754 10:49:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.754 10:49:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.754 10:49:23 -- setup/common.sh@18 -- # local node= 00:03:26.754 10:49:23 -- setup/common.sh@19 -- # local var val 00:03:26.754 10:49:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.754 10:49:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.754 10:49:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.754 10:49:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.754 10:49:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.754 10:49:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111243528 kB' 'MemAvailable: 114483712 kB' 'Buffers: 12152 kB' 'Cached: 8828536 kB' 'SwapCached: 0 kB' 'Active: 6159096 kB' 'Inactive: 3404296 kB' 'Active(anon): 5615256 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 725864 kB' 'Mapped: 144868 kB' 'Shmem: 4892552 kB' 'KReclaimable: 232760 kB' 'Slab: 767584 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 534824 kB' 'KernelStack: 26832 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8468940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231124 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 
kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- 
setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.754 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.754 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.755 10:49:23 -- setup/common.sh@33 -- # echo 0 00:03:26.755 10:49:23 -- setup/common.sh@33 -- # return 0 00:03:26.755 10:49:23 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.755 10:49:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.755 10:49:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.755 10:49:23 -- setup/common.sh@18 -- # local node= 00:03:26.755 10:49:23 -- setup/common.sh@19 -- # local var val 00:03:26.755 10:49:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.755 10:49:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.755 10:49:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.755 10:49:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.755 10:49:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.755 10:49:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111242864 kB' 'MemAvailable: 114483048 kB' 'Buffers: 12152 kB' 'Cached: 8828548 kB' 'SwapCached: 0 kB' 'Active: 6158948 kB' 'Inactive: 3404296 kB' 'Active(anon): 5615108 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 725924 kB' 'Mapped: 144852 kB' 'Shmem: 4892564 kB' 'KReclaimable: 232760 kB' 'Slab: 767584 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 534824 kB' 'KernelStack: 26880 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8468588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231252 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.755 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.755 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 
-- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.756 10:49:23 -- setup/common.sh@32 -- # continue 
00:03:26.756 10:49:23 -- setup/common.sh@31 -- # IFS=': '
00:03:26.756 10:49:23 -- setup/common.sh@31 -- # read -r var val _
00:03:26.756 [setup/common.sh@31-32 xtrace: identical IFS=': ' read/continue pairs walk the remaining /proc/meminfo fields (SecPageTables through HugePages_Free) until the requested key is reached]
00:03:26.756 10:49:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.756 10:49:23 -- setup/common.sh@33 -- # echo 0
00:03:26.756 10:49:23 -- setup/common.sh@33 -- # return 0
00:03:26.756 10:49:23 -- setup/hugepages.sh@100 -- # resv=0
00:03:26.756 10:49:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:26.756 nr_hugepages=1024
00:03:26.756 10:49:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.756 resv_hugepages=0
00:03:26.756 10:49:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.757 surplus_hugepages=0
00:03:26.757 10:49:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.757 anon_hugepages=0
00:03:26.757 10:49:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.757 10:49:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:26.757 10:49:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.757 [setup/common.sh@17-31 xtrace: local get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile of the file, IFS=': ' read setup]
00:03:26.757 10:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111243280 kB' 'MemAvailable: 114483464 kB' 'Buffers: 12152 kB' 'Cached: 8828564 kB' 'SwapCached: 0 kB' 'Active: 6159644 kB' 'Inactive: 3404296 kB' 'Active(anon): 5615804 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 726608 kB' 'Mapped: 144852 kB' 'Shmem: 4892580 kB' 'KReclaimable: 232760 kB' 'Slab: 767584 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 534824 kB' 'KernelStack: 27008 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8485696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231236 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB'
00:03:26.757 [setup/common.sh@31-32 xtrace: every field of the snapshot above is read and skipped in turn until HugePages_Total matches]
00:03:26.758 10:49:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.758 10:49:23 -- setup/common.sh@33 -- # echo 1024
00:03:26.758 10:49:23 -- setup/common.sh@33 -- # return 0
00:03:26.758 10:49:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
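The get_meminfo trace above boils down to one pattern: read /proc/meminfo (or a node's own meminfo file) and walk it with IFS=': ' read -r var val _ until the requested field is found. A minimal standalone sketch of that lookup, offered only as a simplified stand-in for illustration and not as the project's setup/common.sh verbatim:

  # get_meminfo_sketch <field> [node]  -- prints the field's value, e.g. 1024 for HugePages_Total
  get_meminfo_sketch() {
      local get=$1 node=${2:-} line var val
      local mem_f=/proc/meminfo
      # per-node lookups read the node's own meminfo instead (standard sysfs layout)
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          line=${line#"Node $node "}              # node files prefix each line with "Node <id>"
          IFS=': ' read -r var val _ <<< "$line"  # same split the traced helper uses
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # usage: get_meminfo_sketch HugePages_Total ; get_meminfo_sketch HugePages_Surp 0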
00:03:26.758 10:49:23 -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.758 10:49:23 -- setup/hugepages.sh@27 -- # local node
00:03:26.758 10:49:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.758 10:49:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:26.758 10:49:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.758 10:49:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:26.758 10:49:23 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.758 10:49:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.758 10:49:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.758 10:49:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.758 10:49:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:27.021 [setup/common.sh@17-31 xtrace: local get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile of the file with the "Node 0" prefixes stripped, IFS=': ' read setup]
00:03:27.021 10:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53548228 kB' 'MemUsed: 12110780 kB' 'SwapCached: 0 kB' 'Active: 4906648 kB' 'Inactive: 3243776 kB' 'Active(anon): 4502300 kB' 'Inactive(anon): 0 kB' 'Active(file): 404348 kB' 'Inactive(file): 3243776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7935000 kB' 'Mapped: 121944 kB' 'AnonPages: 218420 kB' 'Shmem: 4286876 kB' 'KernelStack: 14408 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177428 kB' 'Slab: 524688 kB' 'SReclaimable: 177428 kB' 'SUnreclaim: 347260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:27.021 [setup/common.sh@31-32 xtrace: every field of the node0 snapshot above is read and skipped in turn until HugePages_Surp matches]
00:03:27.022 10:49:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.022 10:49:23 -- setup/common.sh@33 -- # echo 0
00:03:27.022 10:49:23 -- setup/common.sh@33 -- # return 0
00:03:27.022 10:49:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.022 10:49:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.022 10:49:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.022 10:49:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.022 10:49:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:27.022 node0=1024 expecting 1024
00:03:27.022 10:49:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:27.022
00:03:27.022 real 0m4.104s
00:03:27.022 user 0m1.643s
00:03:27.022 sys 0m2.486s
00:03:27.022 10:49:23 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:27.022 10:49:23 -- common/autotest_common.sh@10 -- # set +x
00:03:27.022 ************************************
00:03:27.022 END TEST default_setup
00:03:27.022 ************************************
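Everything default_setup just verified comes straight from the standard /proc/meminfo hugepage fields: a 1024-page pool with no reserved or surplus pages and no anonymous huge pages, all of it accounted to node 0. A condensed standalone check of the same bookkeeping; the expected value of 1024 is this run's target, not a general constant:

  expected=1024
  read -r total free rsvd surp < <(awk -F '[: ]+' '
      /^HugePages_Total/ {t=$2} /^HugePages_Free/ {f=$2}
      /^HugePages_Rsvd/  {r=$2} /^HugePages_Surp/ {s=$2}
      END {print t, f, r, s}' /proc/meminfo)
  # with nothing reserved or borrowed, the pool should match the requested count exactly
  (( total == expected && rsvd == 0 && surp == 0 )) \
      || echo "unexpected hugepage pool: total=$total rsvd=$rsvd surp=$surp"
  echo "nr_hugepages=$total resv_hugepages=$rsvd surplus_hugepages=$surp"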
00:03:27.022 10:49:23 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:27.022 10:49:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:27.022 10:49:23 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:27.022 10:49:23 -- common/autotest_common.sh@10 -- # set +x
00:03:27.022 ************************************
00:03:27.022 START TEST per_node_1G_alloc
00:03:27.022 ************************************
00:03:27.022 10:49:23 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:27.022 10:49:23 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:27.022 10:49:23 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:27.022 10:49:23 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:27.022 10:49:23 -- setup/hugepages.sh@51 -- # shift
00:03:27.022 10:49:23 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:27.022 10:49:23 -- setup/hugepages.sh@52 -- # local node_ids
00:03:27.022 10:49:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:27.022 10:49:23 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:27.022 10:49:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:27.022 10:49:23 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:27.022 10:49:23 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:27.022 10:49:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:27.022 10:49:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:27.022 10:49:23 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:27.022 10:49:23 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:27.022 10:49:23 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:27.022 10:49:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:27.022 10:49:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:27.022 10:49:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:27.022 10:49:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:27.022 10:49:23 -- setup/hugepages.sh@73 -- # return 0
00:03:27.022 10:49:23 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:27.022 10:49:23 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:27.022 10:49:23 -- setup/hugepages.sh@146 -- # setup output
00:03:27.022 10:49:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.022 10:49:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.326 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:30.326 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:30.326 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:30.589 10:49:27 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
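NRHUGE=512 with HUGENODE=0,1 asks for 512 default-size (2048 kB) pages on each of the two nodes, which is why the pool comes back as 1024 in total. One common way to request and read back that kind of per-node allocation is the per-node sysfs knob; this is a generic sketch of the mechanism, not a claim about what scripts/setup.sh does internally:

  nrhuge=512
  for node in 0 1; do
      knob=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
      echo "$nrhuge" | sudo tee "$knob" > /dev/null   # request the pages on this node
      echo "node${node}: $(cat "$knob") hugepages"    # read back what the kernel granted
  done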
00:03:30.589 10:49:27 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:30.589 [setup/hugepages.sh@89-94 xtrace: verify_nr_hugepages declares its node, sorted_t, sorted_s, surp, resv and anon locals]
00:03:30.589 10:49:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.589 10:49:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.589 [setup/common.sh@17-31 xtrace: local get=AnonHugePages, node unset, mem_f=/proc/meminfo, mapfile of the file, IFS=': ' read setup]
00:03:30.589 10:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111259100 kB' 'MemAvailable: 114499284 kB' 'Buffers: 12152 kB' 'Cached: 8828676 kB' 'SwapCached: 0 kB' 'Active: 6162000 kB' 'Inactive: 3404296 kB' 'Active(anon): 5618160 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 728560 kB' 'Mapped: 143680 kB' 'Shmem: 4892692 kB' 'KReclaimable: 232760 kB' 'Slab: 767872 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535112 kB' 'KernelStack: 27008 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8456564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231380 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB'
00:03:30.589 [setup/common.sh@31-32 xtrace: every field of the snapshot above is read and skipped in turn until AnonHugePages matches]
00:03:30.590 10:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.590 10:49:27 -- setup/common.sh@33 -- # echo 0
00:03:30.590 10:49:27 -- setup/common.sh@33 -- # return 0
00:03:30.590 10:49:27 -- setup/hugepages.sh@97 -- # anon=0
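The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] gate a few lines back reads the transparent-hugepage mode: the bracketed word in that sysfs string is the active setting, so this host runs THP in madvise mode rather than disabled, and verify_nr_hugepages therefore reads AnonHugePages explicitly, which still reports 0 in this run. The same probe in isolation, using the standard sysfs and procfs paths:

  thp_file=/sys/kernel/mm/transparent_hugepage/enabled
  mode=$(cat "$thp_file")                      # e.g. "always [madvise] never"
  if [[ $mode == *"[never]"* ]]; then
      echo "THP disabled; AnonHugePages can be assumed 0"
  else
      echo "THP mode: $mode; checking AnonHugePages explicitly"
      grep '^AnonHugePages:' /proc/meminfo     # reports 0 kB on this box
  fi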
00:03:30.590 10:49:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.590 [setup/common.sh@17-31 xtrace: local get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile of the file, IFS=': ' read setup]
00:03:30.591 10:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111259068 kB' 'MemAvailable: 114499252 kB' 'Buffers: 12152 kB' 'Cached: 8828680 kB' 'SwapCached: 0 kB' 'Active: 6161656 kB' 'Inactive: 3404296 kB' 'Active(anon): 5617816 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 728524 kB' 'Mapped: 143652 kB' 'Shmem: 4892696 kB' 'KReclaimable: 232760 kB' 'Slab: 767872 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535112 kB' 'KernelStack: 27024 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8456572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231380 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB'
00:03:30.591 [setup/common.sh@31-32 xtrace: identical IFS=': ' read/continue pairs compare each field of the snapshot above against HugePages_Surp]
IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.858 10:49:27 -- setup/common.sh@33 -- # echo 0 00:03:30.858 10:49:27 -- setup/common.sh@33 -- # return 0 00:03:30.858 10:49:27 -- setup/hugepages.sh@99 -- # surp=0 00:03:30.858 10:49:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.858 10:49:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.858 10:49:27 -- setup/common.sh@18 -- # local node= 00:03:30.858 10:49:27 -- setup/common.sh@19 -- # local var val 00:03:30.858 10:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.858 10:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.858 10:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.858 10:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.858 10:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.858 10:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111260420 kB' 'MemAvailable: 114500604 kB' 'Buffers: 12152 kB' 'Cached: 8828688 kB' 'SwapCached: 0 kB' 'Active: 6161828 kB' 'Inactive: 3404296 kB' 'Active(anon): 5617988 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 728148 kB' 'Mapped: 143660 kB' 'Shmem: 4892704 kB' 'KReclaimable: 232760 kB' 'Slab: 767904 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535144 kB' 'KernelStack: 26880 kB' 'PageTables: 7420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8454952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231380 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 
00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.859 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.859 10:49:27 -- setup/common.sh@33 -- # echo 0 00:03:30.859 10:49:27 -- setup/common.sh@33 -- # return 0 00:03:30.859 10:49:27 -- setup/hugepages.sh@100 -- # resv=0 00:03:30.859 10:49:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.859 nr_hugepages=1024 00:03:30.859 10:49:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.859 resv_hugepages=0 00:03:30.859 10:49:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.859 surplus_hugepages=0 00:03:30.859 10:49:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.859 anon_hugepages=0 00:03:30.859 10:49:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.859 10:49:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
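The get_meminfo calls traced above all follow the same shape: read the whole meminfo file into an array, strip the per-node "Node N " prefix if present, then walk the lines with IFS=': ' until the requested key matches and echo its value. A rough, self-contained reconstruction of that loop (function name and exact quoting are paraphrased from the trace, not copied verbatim from SPDK's setup/common.sh):

#!/usr/bin/env bash
shopt -s extglob                       # needed for the +([0-9]) prefix strip below
meminfo_lookup() {
    local get=$1 node=${2-}            # key to fetch, optional NUMA node number
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
meminfo_lookup HugePages_Rsvd          # prints 0 on the system in this run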
00:03:30.859 10:49:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.859 10:49:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.859 10:49:27 -- setup/common.sh@18 -- # local node= 00:03:30.859 10:49:27 -- setup/common.sh@19 -- # local var val 00:03:30.859 10:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.859 10:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.859 10:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.859 10:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.859 10:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.859 10:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.859 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111262412 kB' 'MemAvailable: 114502612 kB' 'Buffers: 12152 kB' 'Cached: 8828708 kB' 'SwapCached: 0 kB' 'Active: 6162108 kB' 'Inactive: 3404296 kB' 'Active(anon): 5618268 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 729000 kB' 'Mapped: 143652 kB' 'Shmem: 4892724 kB' 'KReclaimable: 232792 kB' 'Slab: 767944 kB' 'SReclaimable: 232792 kB' 'SUnreclaim: 535152 kB' 'KernelStack: 26832 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8453700 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231252 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 
-- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 
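A note on the backslash-heavy comparisons that dominate this trace: the right-hand side of each [[ ... == ... ]] is a quoted string, and bash's xtrace escapes every character of it to show that it is matched literally rather than as a glob. In source form each of those lines is roughly just a literal key comparison:

var=HugePages_Free
[[ $var == "HugePages_Total" ]] || echo "no match, keep scanning"   # xtrace prints this test as [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]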
00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- 
setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.861 10:49:27 -- setup/common.sh@33 -- # echo 1024 00:03:30.861 10:49:27 -- setup/common.sh@33 -- # return 0 00:03:30.861 10:49:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.861 10:49:27 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.861 10:49:27 -- setup/hugepages.sh@27 -- # local node 00:03:30.861 10:49:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.861 10:49:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.861 10:49:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.861 10:49:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.861 10:49:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.861 10:49:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.861 10:49:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.861 10:49:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.861 10:49:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.861 10:49:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.861 10:49:27 -- setup/common.sh@18 -- # local node=0 00:03:30.861 10:49:27 -- setup/common.sh@19 -- # local var val 00:03:30.861 10:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.861 10:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.861 10:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.861 10:49:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.861 10:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.861 10:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54604416 kB' 'MemUsed: 11054592 kB' 'SwapCached: 0 kB' 'Active: 4902632 kB' 'Inactive: 3243776 kB' 'Active(anon): 4498284 kB' 'Inactive(anon): 0 kB' 'Active(file): 404348 kB' 'Inactive(file): 3243776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7935004 kB' 'Mapped: 120736 kB' 'AnonPages: 214568 kB' 'Shmem: 4286880 kB' 'KernelStack: 14280 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177428 kB' 'Slab: 524720 kB' 'SReclaimable: 177428 kB' 'SUnreclaim: 347292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 
-- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.861 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.861 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@33 -- # echo 0 00:03:30.862 10:49:27 -- setup/common.sh@33 -- # return 0 00:03:30.862 10:49:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.862 10:49:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.862 10:49:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.862 10:49:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.862 10:49:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.862 10:49:27 -- setup/common.sh@18 -- # local node=1 00:03:30.862 10:49:27 -- setup/common.sh@19 -- # local var val 00:03:30.862 10:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.862 10:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.862 10:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.862 10:49:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.862 10:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.862 10:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679876 kB' 'MemFree: 56658684 kB' 'MemUsed: 4021192 kB' 'SwapCached: 0 kB' 'Active: 1258432 kB' 'Inactive: 160520 kB' 'Active(anon): 1118940 kB' 'Inactive(anon): 0 kB' 'Active(file): 139492 kB' 'Inactive(file): 160520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 905884 kB' 'Mapped: 22916 kB' 'AnonPages: 513260 kB' 'Shmem: 605872 kB' 'KernelStack: 12504 kB' 'PageTables: 3500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 55364 kB' 'Slab: 243224 kB' 'SReclaimable: 55364 kB' 'SUnreclaim: 187860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue 
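The two per-node passes in this stretch read /sys/devices/system/node/node0/meminfo and node1/meminfo instead of /proc/meminfo; that is where the "node0=512 expecting 512" and "node1=512 expecting 512" assertions further down get their numbers. Assuming the two-node layout shown in the trace, the same per-node counters can be listed directly:

# per-node lines look like "Node 0 HugePages_Total:   512", so the key sits in field 3
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
    surp=$(awk '$3 == "HugePages_Surp:"  { print $4 }' "$node_dir/meminfo")
    echo "node$node: HugePages_Total=$total HugePages_Surp=$surp"
done
# with 1024 x 2048 kB pages spread evenly, this prints 512 total and 0 surplus for both nodes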
00:03:30.862 10:49:27 -- setup/common.sh@31 -- # IFS=': '
00:03:30.862 10:49:27 -- setup/common.sh@31 -- # read -r var val _
00:03:30.862 10:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.862 10:49:27 -- setup/common.sh@32 -- # continue
...
00:03:30.863 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.863 10:49:27 -- setup/common.sh@32 -- # continue
00:03:30.863 10:49:27 -- setup/common.sh@31 -- # IFS=': '
00:03:30.863 10:49:27 -- setup/common.sh@31 -- # read -r var val _
00:03:30.863 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.863 10:49:27 -- setup/common.sh@32 -- # continue
00:03:30.863 10:49:27 -- setup/common.sh@31 -- # IFS=': '
00:03:30.863 10:49:27 -- setup/common.sh@31 -- # read -r var val _
00:03:30.863 10:49:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.863 10:49:27 -- setup/common.sh@33 -- # echo 0
00:03:30.863 10:49:27 -- setup/common.sh@33 -- # return 0
00:03:30.863 10:49:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.863 10:49:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.863 10:49:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.863 10:49:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.863 10:49:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:30.863 node0=512 expecting 512
00:03:30.863 10:49:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.863 10:49:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.863 10:49:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.863 10:49:27 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:30.863 node1=512 expecting 512
00:03:30.863 10:49:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:30.863
00:03:30.863 real 0m3.868s
00:03:30.863 user 0m1.602s
00:03:30.863 sys 0m2.326s
00:03:30.863 10:49:27 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:30.863 10:49:27 -- common/autotest_common.sh@10 -- # set +x
00:03:30.863 ************************************
00:03:30.863 END TEST per_node_1G_alloc
00:03:30.863 ************************************
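The per-node expectations printed above (node0=512 expecting 512, node1=512 expecting 512) are just the requested hugepage total split evenly across the two NUMA nodes of this rig. The snippet below is a minimal, illustrative sketch of that check, not the actual setup/hugepages.sh; it assumes a standard Linux layout where per-node meminfo lives under /sys/devices/system/node/node<N>/meminfo.

    #!/usr/bin/env bash
    # Illustrative only: re-derive "nodeN=<count> expecting 512" from sysfs.
    expected_total=1024                                  # hugepages requested by the test
    nodes=(/sys/devices/system/node/node[0-9]*)          # e.g. node0 node1 on this rig
    per_node=$(( expected_total / ${#nodes[@]} ))        # 1024 / 2 = 512
    for node_dir in "${nodes[@]}"; do
        # per-node meminfo lines look like "Node 0 HugePages_Total:   512"
        got=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        echo "node${node_dir##*node}=${got} expecting ${per_node}"
    done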
00:03:30.863 10:49:27 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:30.863 10:49:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:30.863 10:49:27 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:30.863 10:49:27 -- common/autotest_common.sh@10 -- # set +x
00:03:30.863 ************************************
00:03:30.863 START TEST even_2G_alloc
00:03:30.863 ************************************
00:03:30.863 10:49:27 -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:03:30.863 10:49:27 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:30.863 10:49:27 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:30.863 10:49:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:30.863 10:49:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:30.863 10:49:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:30.863 10:49:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:30.863 10:49:27 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:30.863 10:49:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:30.863 10:49:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:30.863 10:49:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:30.863 10:49:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:30.863 10:49:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:30.863 10:49:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:30.863 10:49:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:30.863 10:49:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.863 10:49:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:30.863 10:49:27 -- setup/hugepages.sh@83 -- # : 512
00:03:30.863 10:49:27 -- setup/hugepages.sh@84 -- # : 1
00:03:30.863 10:49:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.863 10:49:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:30.863 10:49:27 -- setup/hugepages.sh@83 -- # : 0
00:03:30.863 10:49:27 -- setup/hugepages.sh@84 -- # : 0
00:03:30.863 10:49:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.863 10:49:27 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:30.863 10:49:27 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:30.863 10:49:27 -- setup/hugepages.sh@153 -- # setup output
00:03:30.863 10:49:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.863 10:49:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:34.169 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:34.169 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:34.169 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:34.430 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:34.430 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:34.430 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:34.697 10:49:31 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:34.697 10:49:31 -- setup/hugepages.sh@89 -- # local node
00:03:34.697 10:49:31 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:34.697 10:49:31 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:34.697 10:49:31 -- setup/hugepages.sh@92 -- # local surp
00:03:34.697 10:49:31 -- setup/hugepages.sh@93 -- # local resv
00:03:34.697 10:49:31 -- setup/hugepages.sh@94 -- # local anon
00:03:34.697 10:49:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:34.697 10:49:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:34.697 10:49:31 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:34.697 10:49:31 -- setup/common.sh@18 -- # local node=
00:03:34.697 10:49:31 -- setup/common.sh@19 -- # local var val
00:03:34.697 10:49:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.697 10:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.697 10:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.697 10:49:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.697 10:49:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.697 10:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.697 10:49:31 -- setup/common.sh@31 -- # IFS=': '
00:03:34.697 10:49:31 -- setup/common.sh@31 -- # read -r var val _
00:03:34.697 10:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111266904 kB' 'MemAvailable: 114507104 kB' 'Buffers: 12152 kB' 'Cached: 8828816 kB' 'SwapCached: 0 kB' 'Active: 6165980 kB' 'Inactive: 3404296 kB' 'Active(anon): 5622140 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 732224 kB' 'Mapped: 143696 kB' 'Shmem: 4892832 kB' 'KReclaimable: 232792 kB' 'Slab: 768180 kB' 'SReclaimable: 232792 kB' 'SUnreclaim: 535388 kB' 'KernelStack: 26800 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8454444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231220 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB'
00:03:34.697 10:49:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:34.697 10:49:31 -- setup/common.sh@32 -- # continue
...
00:03:34.698 10:49:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:34.698 10:49:31 -- setup/common.sh@32 -- # continue
00:03:34.698 10:49:31 -- setup/common.sh@31 -- # IFS=': '
00:03:34.698 10:49:31 -- setup/common.sh@31 -- # read -r var val _
00:03:34.698 10:49:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:34.698 10:49:31 -- setup/common.sh@33 -- # echo 0
00:03:34.698 10:49:31 -- setup/common.sh@33 -- # return 0
00:03:34.698 10:49:31 -- setup/hugepages.sh@97 -- # anon=0
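The long field-by-field scan traced above is get_meminfo walking /proc/meminfo with IFS=': ' until it reaches the requested key (AnonHugePages here) and echoing its value. Below is a rough stand-in for that helper, a sketch rather than the real setup/common.sh; the per-node prefix handling in particular is simplified.

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup: split each line on ': ', skip
    # fields until the wanted key, print the value. With no node argument it
    # reads /proc/meminfo, matching the "local node=" / mem_f=/proc/meminfo
    # entries in the trace above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # per-node files carry a "Node <n> " prefix
        return 1
    }
    get_meminfo_sketch AnonHugePages   # prints 0 on this machine, matching the anon=0 just traced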
00:03:34.698 10:49:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:34.698 10:49:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.698 10:49:31 -- setup/common.sh@18 -- # local node=
00:03:34.698 10:49:31 -- setup/common.sh@19 -- # local var val
00:03:34.698 10:49:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.698 10:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.698 10:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.698 10:49:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.698 10:49:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.698 10:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.698 10:49:31 -- setup/common.sh@31 -- # IFS=': '
00:03:34.698 10:49:31 -- setup/common.sh@31 -- # read -r var val _
00:03:34.698 10:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111268416 kB' 'MemAvailable: 114508600 kB' 'Buffers: 12152 kB' 'Cached: 8828820 kB' 'SwapCached: 0 kB' 'Active: 6165068 kB' 'Inactive: 3404296 kB' 'Active(anon): 5621228 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 731812 kB' 'Mapped: 143676 kB' 'Shmem: 4892836 kB' 'KReclaimable: 232760 kB' 'Slab: 768160 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535400 kB' 'KernelStack: 26800 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8454456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231204 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB'
00:03:34.698 10:49:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.698 10:49:31 -- setup/common.sh@32 -- # continue
...
00:03:34.699 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.699 10:49:31 -- setup/common.sh@32 -- # continue
00:03:34.699 10:49:31 -- setup/common.sh@31 -- # IFS=': '
00:03:34.699 10:49:31 -- setup/common.sh@31 -- # read -r var val _
00:03:34.699 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.699 10:49:31 -- setup/common.sh@33 -- # echo 0
00:03:34.699 10:49:31 -- setup/common.sh@33 -- # return 0
00:03:34.699 10:49:31 -- setup/hugepages.sh@99 -- # surp=0
00:03:34.699 10:49:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:34.699 10:49:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:34.699 10:49:31 -- setup/common.sh@18 -- # local node=
00:03:34.700 10:49:31 -- setup/common.sh@19 -- # local var val
00:03:34.700 10:49:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.700 10:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.700 10:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.700 10:49:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.700 10:49:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.700 10:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.700 10:49:31 -- setup/common.sh@31 -- # IFS=': '
00:03:34.700 10:49:31 -- setup/common.sh@31 -- # read -r var val _
00:03:34.700 10:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111268416 kB' 'MemAvailable: 114508600 kB' 'Buffers: 12152 kB' 'Cached: 8828832 kB' 'SwapCached: 0 kB' 'Active: 6165016 kB' 'Inactive: 3404296 kB' 'Active(anon): 5621176 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 731748 kB' 'Mapped: 143676 kB' 'Shmem: 4892848 kB' 'KReclaimable: 232760 kB' 'Slab: 768160 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535400 kB' 'KernelStack: 26816 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8454468 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231204 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB'
00:03:34.700 10:49:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.700 10:49:31 -- setup/common.sh@32 -- # continue
...
00:03:34.701 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.701 10:49:31 -- setup/common.sh@32 -- # continue
00:03:34.701 10:49:31 -- setup/common.sh@31 -- # IFS=': '
00:03:34.701 10:49:31 -- setup/common.sh@31 -- # read -r var val _
00:03:34.701 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.701 10:49:31 -- setup/common.sh@33 -- # echo 0
00:03:34.701 10:49:31 -- setup/common.sh@33 -- # return 0
00:03:34.701 10:49:31 -- setup/hugepages.sh@100 -- # resv=0
00:03:34.701 10:49:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:34.701 nr_hugepages=1024
00:03:34.701 10:49:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:34.701 resv_hugepages=0
00:03:34.701 10:49:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:34.701 surplus_hugepages=0
00:03:34.701 10:49:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:34.701 anon_hugepages=0
00:03:34.701 10:49:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.701 10:49:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
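The (( 1024 == nr_hugepages + surp + resv )) check just above is plain accounting: the HugePages_Total reported by the kernel must equal the requested count once surplus and reserved pages are added in, and the test additionally expects zero transparent (anonymous) huge pages. A self-contained sketch of the same arithmetic follows; it is illustrative only, not the verify_nr_hugepages implementation.

    #!/usr/bin/env bash
    # Illustrative re-check of the hugepage accounting from /proc/meminfo.
    nr_hugepages=1024                                            # NRHUGE requested by the test
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 in this run
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)     # 0 (kB) in this run
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"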
00:03:34.701 10:49:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:34.701 10:49:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:34.701 10:49:31 -- setup/common.sh@18 -- # local node=
00:03:34.701 10:49:31 -- setup/common.sh@19 -- # local var val
00:03:34.701 10:49:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.701 10:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.701 10:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.701 10:49:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.701 10:49:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.701 10:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.701 10:49:31 -- setup/common.sh@31 -- # IFS=': '
00:03:34.701 10:49:31 -- setup/common.sh@31 -- # read -r var val _
00:03:34.701 10:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111268416 kB' 'MemAvailable: 114508600 kB' 'Buffers: 12152 kB' 'Cached: 8828860 kB' 'SwapCached: 0 kB' 'Active: 6164536 kB' 'Inactive: 3404296 kB' 'Active(anon): 5620696 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 731212 kB' 'Mapped: 143676 kB' 'Shmem: 4892876 kB' 'KReclaimable: 232760 kB' 'Slab: 768160 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535400 kB' 'KernelStack: 26784 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8454484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231204 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB'
00:03:34.701 10:49:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:34.701 10:49:31 -- setup/common.sh@32 -- # continue
...
00:03:34.701 10:49:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:34.701 10:49:31 -- setup/common.sh@32 -- # continue
00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': '
00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _
00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 
-- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # 
[[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.702 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.703 10:49:31 -- setup/common.sh@33 -- # echo 1024 00:03:34.703 10:49:31 -- setup/common.sh@33 -- # return 0 00:03:34.703 10:49:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.703 10:49:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.703 10:49:31 -- setup/hugepages.sh@27 -- # local node 00:03:34.703 10:49:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.703 10:49:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.703 10:49:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.703 10:49:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.703 10:49:31 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.703 10:49:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.703 10:49:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.703 10:49:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.703 10:49:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.703 10:49:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.703 10:49:31 -- setup/common.sh@18 -- # local node=0 00:03:34.703 10:49:31 -- setup/common.sh@19 -- # local var val 00:03:34.703 10:49:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.703 10:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.703 10:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.703 10:49:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.703 10:49:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.703 10:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54612036 kB' 'MemUsed: 11046972 kB' 'SwapCached: 0 kB' 'Active: 4903504 kB' 'Inactive: 3243776 kB' 'Active(anon): 4499156 kB' 'Inactive(anon): 0 kB' 'Active(file): 404348 kB' 'Inactive(file): 3243776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7935028 kB' 'Mapped: 120760 kB' 'AnonPages: 215516 kB' 'Shmem: 4286904 kB' 'KernelStack: 14296 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177460 kB' 'Slab: 524756 kB' 'SReclaimable: 177460 kB' 'SUnreclaim: 347296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- 
setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.703 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@33 -- # echo 0 00:03:34.704 10:49:31 -- setup/common.sh@33 -- # return 0 00:03:34.704 10:49:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.704 10:49:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.704 10:49:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.704 10:49:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:34.704 10:49:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.704 10:49:31 -- setup/common.sh@18 -- # local node=1 00:03:34.704 10:49:31 -- setup/common.sh@19 -- # local var val 00:03:34.704 10:49:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.704 10:49:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.704 10:49:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:34.704 10:49:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:34.704 10:49:31 -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:34.704 10:49:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679876 kB' 'MemFree: 56657884 kB' 'MemUsed: 4021992 kB' 'SwapCached: 0 kB' 'Active: 1261400 kB' 'Inactive: 160520 kB' 'Active(anon): 1121908 kB' 'Inactive(anon): 0 kB' 'Active(file): 139492 kB' 'Inactive(file): 160520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 906000 kB' 'Mapped: 22916 kB' 'AnonPages: 516052 kB' 'Shmem: 605988 kB' 'KernelStack: 12504 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 55300 kB' 'Slab: 243404 kB' 'SReclaimable: 55300 kB' 'SUnreclaim: 188104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 
10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:49:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- 
setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 
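The long run of "continue" entries through here is setup/common.sh's get_meminfo scanning every field of a node's meminfo copy until it reaches the requested key (HugePages_Surp, which comes back 0 on both nodes in this run). A condensed sketch of that lookup, following the control flow visible in the trace; the exact wiring inside the SPDK script (how mapfile is fed, argument handling) should be treated as an assumption rather than the verbatim source:

  shopt -s extglob                         # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo line var val _
      # per-node queries read the sysfs copy instead of the global /proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node N " prefix on node files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # every non-matching field is one of the "continue" lines seen in the trace
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  # get_meminfo HugePages_Surp 1   -> prints 0 for node1 in this run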
00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # continue 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.966 10:49:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.966 10:49:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.966 10:49:31 -- setup/common.sh@33 -- # echo 0 00:03:34.966 10:49:31 -- setup/common.sh@33 -- # return 0 00:03:34.966 10:49:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.966 10:49:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.966 10:49:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.966 10:49:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.966 10:49:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.966 node0=512 expecting 512 00:03:34.966 10:49:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.966 10:49:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.966 10:49:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.966 10:49:31 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:34.966 node1=512 expecting 512 00:03:34.966 10:49:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.966 00:03:34.966 real 0m3.886s 00:03:34.966 user 0m1.541s 00:03:34.966 sys 0m2.403s 00:03:34.966 10:49:31 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:34.966 10:49:31 -- common/autotest_common.sh@10 -- # set +x 00:03:34.966 ************************************ 00:03:34.966 END TEST even_2G_alloc 00:03:34.966 ************************************ 00:03:34.966 10:49:31 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:34.966 10:49:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:34.966 10:49:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:34.966 10:49:31 -- common/autotest_common.sh@10 -- # set +x 00:03:34.966 ************************************ 00:03:34.966 START TEST odd_alloc 00:03:34.966 ************************************ 00:03:34.966 10:49:31 -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:34.966 10:49:31 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:34.966 10:49:31 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:34.966 10:49:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.966 10:49:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.966 10:49:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:34.966 10:49:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.966 10:49:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.966 10:49:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.966 10:49:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:34.966 10:49:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.966 10:49:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.966 10:49:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.966 10:49:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.966 10:49:31 -- setup/hugepages.sh@74 -- # (( 0 > 
0 )) 00:03:34.966 10:49:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.966 10:49:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:34.966 10:49:31 -- setup/hugepages.sh@83 -- # : 513 00:03:34.966 10:49:31 -- setup/hugepages.sh@84 -- # : 1 00:03:34.966 10:49:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.966 10:49:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:34.966 10:49:31 -- setup/hugepages.sh@83 -- # : 0 00:03:34.966 10:49:31 -- setup/hugepages.sh@84 -- # : 0 00:03:34.966 10:49:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.966 10:49:31 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:34.966 10:49:31 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:34.966 10:49:31 -- setup/hugepages.sh@160 -- # setup output 00:03:34.967 10:49:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.967 10:49:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.276 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:38.276 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:38.276 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:38.541 10:49:35 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:38.541 10:49:35 -- setup/hugepages.sh@89 -- # local node 00:03:38.541 10:49:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.541 10:49:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.541 10:49:35 -- setup/hugepages.sh@92 -- # local surp 00:03:38.541 10:49:35 -- setup/hugepages.sh@93 -- # local resv 00:03:38.541 10:49:35 -- setup/hugepages.sh@94 -- # local anon 00:03:38.541 10:49:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.541 10:49:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.541 10:49:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.541 10:49:35 -- setup/common.sh@18 -- # local node= 00:03:38.541 10:49:35 -- setup/common.sh@19 -- # local var val 00:03:38.541 10:49:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.541 10:49:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.541 10:49:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.541 10:49:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.542 10:49:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.542 10:49:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.542 10:49:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111239852 kB' 'MemAvailable: 114480036 kB' 'Buffers: 12152 kB' 'Cached: 8828956 kB' 'SwapCached: 0 kB' 'Active: 6170172 kB' 'Inactive: 3404296 kB' 'Active(anon): 5626332 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 736312 kB' 'Mapped: 143796 kB' 'Shmem: 4892972 kB' 'KReclaimable: 232760 kB' 'Slab: 768524 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535764 kB' 'KernelStack: 26816 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 8455360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231220 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 
10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.542 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 10:49:35 -- setup/common.sh@33 -- # echo 0 00:03:38.543 10:49:35 -- setup/common.sh@33 -- # return 0 00:03:38.543 10:49:35 -- setup/hugepages.sh@97 -- # anon=0 00:03:38.543 10:49:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.543 10:49:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.543 10:49:35 -- setup/common.sh@18 -- # local node= 00:03:38.543 10:49:35 -- setup/common.sh@19 -- # local var val 00:03:38.543 10:49:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.543 10:49:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.543 10:49:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.543 10:49:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.543 10:49:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.543 10:49:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111242520 kB' 'MemAvailable: 114482704 kB' 'Buffers: 12152 kB' 'Cached: 8828956 kB' 'SwapCached: 0 kB' 'Active: 6169584 kB' 'Inactive: 3404296 kB' 'Active(anon): 5625744 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 735816 kB' 
'Mapped: 143768 kB' 'Shmem: 4892972 kB' 'KReclaimable: 232760 kB' 'Slab: 768520 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535760 kB' 'KernelStack: 26848 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 8455372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231204 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- 
setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 
10:49:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 
10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 10:49:35 -- setup/common.sh@33 -- # echo 0 00:03:38.544 10:49:35 -- setup/common.sh@33 -- # return 0 00:03:38.544 10:49:35 -- setup/hugepages.sh@99 -- # surp=0 00:03:38.544 10:49:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.544 10:49:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.544 10:49:35 -- setup/common.sh@18 -- # local node= 00:03:38.544 10:49:35 -- setup/common.sh@19 -- # local var val 00:03:38.544 10:49:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.544 10:49:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.544 10:49:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.544 10:49:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.544 10:49:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.544 10:49:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111242192 kB' 'MemAvailable: 114482376 kB' 'Buffers: 12152 kB' 'Cached: 8828956 kB' 'SwapCached: 0 kB' 'Active: 6168904 kB' 'Inactive: 3404296 kB' 'Active(anon): 5625064 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 735552 kB' 'Mapped: 143692 kB' 'Shmem: 4892972 kB' 'KReclaimable: 232760 kB' 'Slab: 768508 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535748 kB' 'KernelStack: 26816 kB' 'PageTables: 7580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 8455388 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231220 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- 
setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.544 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.544 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.809 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- 
setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.810 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.810 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.810 10:49:35 -- setup/common.sh@33 -- # echo 0 00:03:38.810 10:49:35 -- setup/common.sh@33 -- # return 0 00:03:38.810 10:49:35 -- setup/hugepages.sh@100 -- # resv=0 
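
(At this point anon, surp and resv have each come back as 0.) The wall of repeated "[[ ... == \H\u\g\e\P\a\g\e\s... ]] / continue" lines above is simply get_meminfo from test/setup/common.sh scanning meminfo one field at a time until it reaches the requested key. As a minimal, simplified sketch of that lookup, written for readability and not a copy of the SPDK script (the per-node handling in particular is condensed):

#!/usr/bin/env bash
# Minimal sketch of the lookup the xtrace above keeps repeating: get_meminfo
# walks /proc/meminfo (or a per-node meminfo file) field by field and echoes
# the value of the requested key.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries (e.g. "get_meminfo HugePages_Surp 0" later in the log)
    # read the node's own meminfo, whose lines carry a "Node <N> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        if [[ $line == "Node "* ]]; then
            line=${line#Node }   # drop "Node "
            line=${line#* }      # drop the node number
        fi
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"     # e.g. 1025 for HugePages_Total on this box
            return 0
        fi
    done <"$mem_f"
    echo 0
}

On the machine above, get_meminfo HugePages_Total would print 1025 and get_meminfo HugePages_Surp 0 the node0 surplus (0), which is exactly the pair of values the hugepages test goes on to compare in the next lines of the trace.
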
00:03:38.810 10:49:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:38.810 nr_hugepages=1025 00:03:38.811 10:49:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.811 resv_hugepages=0 00:03:38.811 10:49:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.811 surplus_hugepages=0 00:03:38.811 10:49:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.811 anon_hugepages=0 00:03:38.811 10:49:35 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:38.811 10:49:35 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:38.811 10:49:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.811 10:49:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.811 10:49:35 -- setup/common.sh@18 -- # local node= 00:03:38.811 10:49:35 -- setup/common.sh@19 -- # local var val 00:03:38.811 10:49:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.811 10:49:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.811 10:49:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.811 10:49:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.811 10:49:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.811 10:49:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111244824 kB' 'MemAvailable: 114485008 kB' 'Buffers: 12152 kB' 'Cached: 8828980 kB' 'SwapCached: 0 kB' 'Active: 6169456 kB' 'Inactive: 3404296 kB' 'Active(anon): 5625616 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 736040 kB' 'Mapped: 143692 kB' 'Shmem: 4892996 kB' 'KReclaimable: 232760 kB' 'Slab: 768508 kB' 'SReclaimable: 232760 kB' 'SUnreclaim: 535748 kB' 'KernelStack: 26768 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 8458312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231236 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.811 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.811 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.812 10:49:35 -- setup/common.sh@33 -- # echo 1025 00:03:38.812 10:49:35 -- setup/common.sh@33 -- # return 0 00:03:38.812 10:49:35 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:38.812 10:49:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.812 10:49:35 -- setup/hugepages.sh@27 -- # local node 00:03:38.812 10:49:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.812 10:49:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.812 10:49:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.812 10:49:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:38.812 10:49:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.812 10:49:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.812 10:49:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.812 10:49:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.812 10:49:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.812 10:49:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.812 10:49:35 -- setup/common.sh@18 -- # local node=0 00:03:38.812 10:49:35 -- setup/common.sh@19 -- # 
local var val 00:03:38.812 10:49:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.812 10:49:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.812 10:49:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.812 10:49:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.812 10:49:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.812 10:49:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54592568 kB' 'MemUsed: 11066440 kB' 'SwapCached: 0 kB' 'Active: 4903808 kB' 'Inactive: 3243776 kB' 'Active(anon): 4499460 kB' 'Inactive(anon): 0 kB' 'Active(file): 404348 kB' 'Inactive(file): 3243776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7935076 kB' 'Mapped: 120776 kB' 'AnonPages: 215752 kB' 'Shmem: 4286952 kB' 'KernelStack: 14344 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177460 kB' 'Slab: 525196 kB' 'SReclaimable: 177460 kB' 'SUnreclaim: 347736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 
10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.812 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.812 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@33 -- # echo 0 00:03:38.813 10:49:35 -- setup/common.sh@33 -- # return 0 00:03:38.813 10:49:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.813 10:49:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.813 10:49:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.813 10:49:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:38.813 10:49:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.813 10:49:35 -- setup/common.sh@18 -- # local node=1 00:03:38.813 10:49:35 -- setup/common.sh@19 -- # local var val 00:03:38.813 10:49:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.813 10:49:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.813 10:49:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:38.813 10:49:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:38.813 10:49:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.813 10:49:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679876 kB' 'MemFree: 56653076 kB' 'MemUsed: 4026800 kB' 'SwapCached: 0 kB' 'Active: 1265700 kB' 'Inactive: 160520 kB' 'Active(anon): 1126208 kB' 'Inactive(anon): 0 kB' 'Active(file): 139492 kB' 'Inactive(file): 160520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 906076 kB' 'Mapped: 22916 kB' 'AnonPages: 520300 kB' 'Shmem: 606064 kB' 'KernelStack: 12520 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 55300 kB' 'Slab: 243320 kB' 'SReclaimable: 55300 kB' 'SUnreclaim: 188020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 
-- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.813 10:49:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # continue 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 10:49:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 10:49:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.814 10:49:35 -- setup/common.sh@33 -- # echo 0 00:03:38.814 10:49:35 -- setup/common.sh@33 -- # return 0 00:03:38.814 10:49:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.814 10:49:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.814 10:49:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.814 10:49:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.814 10:49:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:38.814 node0=512 expecting 513 00:03:38.814 10:49:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.814 10:49:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.814 10:49:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.814 
10:49:35 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:38.814 node1=513 expecting 512 00:03:38.814 10:49:35 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:38.814 00:03:38.814 real 0m3.865s 00:03:38.814 user 0m1.565s 00:03:38.814 sys 0m2.351s 00:03:38.814 10:49:35 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:38.814 10:49:35 -- common/autotest_common.sh@10 -- # set +x 00:03:38.814 ************************************ 00:03:38.814 END TEST odd_alloc 00:03:38.814 ************************************ 00:03:38.814 10:49:35 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:38.814 10:49:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:38.814 10:49:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.814 10:49:35 -- common/autotest_common.sh@10 -- # set +x 00:03:38.814 ************************************ 00:03:38.814 START TEST custom_alloc 00:03:38.814 ************************************ 00:03:38.814 10:49:35 -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:38.814 10:49:35 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:38.814 10:49:35 -- setup/hugepages.sh@169 -- # local node 00:03:38.814 10:49:35 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:38.814 10:49:35 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:38.814 10:49:35 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:38.814 10:49:35 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:38.814 10:49:35 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:38.814 10:49:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.814 10:49:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.814 10:49:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:38.814 10:49:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.814 10:49:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.814 10:49:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.814 10:49:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:38.814 10:49:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.814 10:49:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.814 10:49:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.814 10:49:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.814 10:49:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:38.814 10:49:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.814 10:49:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:38.814 10:49:35 -- setup/hugepages.sh@83 -- # : 256 00:03:38.814 10:49:35 -- setup/hugepages.sh@84 -- # : 1 00:03:38.814 10:49:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.814 10:49:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:38.814 10:49:35 -- setup/hugepages.sh@83 -- # : 0 00:03:38.814 10:49:35 -- setup/hugepages.sh@84 -- # : 0 00:03:38.815 10:49:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:38.815 10:49:35 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:38.815 10:49:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:38.815 10:49:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 
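The custom_alloc run above requests two pools: 1048576 kB for the first and 2097152 kB for the second. With the default 2048 kB hugepage size that works out to 512 and 1024 pages, and with two NUMA nodes the first pool is spread evenly (the nodes_test[...]=256 assignments in the trace). Below is a condensed, illustrative sketch of that arithmetic; the names are simplified and this is not the exact SPDK get_test_nr_hugepages / get_test_nr_hugepages_per_node code.

    # Illustrative sketch only, simplified from the trace above (not the SPDK helpers).
    default_hugepages=2048            # kB, the Hugepagesize reported in /proc/meminfo
    no_nodes=2                        # NUMA nodes on this test rig

    pages_for() {                     # requested size in kB -> number of hugepages
        local size_kb=$1
        echo $(( size_kb / default_hugepages ))
    }

    nr0=$(pages_for 1048576)          # 512 pages for the first pool
    nr1=$(pages_for 2097152)          # 1024 pages for the second pool
    per_node=$(( nr0 / no_nodes ))    # 256 per node, matching nodes_test[]=256 above

    echo "pool0=${nr0} (${per_node}/node) pool1=${nr1} total=$(( nr0 + nr1 ))"

Further down, the per-node targets are folded into the HUGENODE string handed to setup.sh ('nodes_hp[0]=512,nodes_hp[1]=1024'), which is why verify_nr_hugepages later expects a total of 1536 pages.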
00:03:38.815 10:49:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.815 10:49:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.815 10:49:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.815 10:49:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.815 10:49:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.815 10:49:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.815 10:49:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.815 10:49:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.815 10:49:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:38.815 10:49:35 -- setup/hugepages.sh@78 -- # return 0 00:03:38.815 10:49:35 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:38.815 10:49:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:38.815 10:49:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:38.815 10:49:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:38.815 10:49:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:38.815 10:49:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:38.815 10:49:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.815 10:49:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.815 10:49:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.815 10:49:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.815 10:49:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.815 10:49:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.815 10:49:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:38.815 10:49:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.815 10:49:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:38.815 10:49:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.815 10:49:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:38.815 10:49:35 -- setup/hugepages.sh@78 -- # return 0 00:03:38.815 10:49:35 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:38.815 10:49:35 -- setup/hugepages.sh@187 -- # setup output 00:03:38.815 10:49:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.815 10:49:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.121 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:65:00.0 (144d a80a): Already 
using the vfio-pci driver 00:03:42.121 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.121 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.701 10:49:39 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:42.701 10:49:39 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:42.701 10:49:39 -- setup/hugepages.sh@89 -- # local node 00:03:42.701 10:49:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.701 10:49:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.701 10:49:39 -- setup/hugepages.sh@92 -- # local surp 00:03:42.701 10:49:39 -- setup/hugepages.sh@93 -- # local resv 00:03:42.701 10:49:39 -- setup/hugepages.sh@94 -- # local anon 00:03:42.701 10:49:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.701 10:49:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.701 10:49:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.701 10:49:39 -- setup/common.sh@18 -- # local node= 00:03:42.701 10:49:39 -- setup/common.sh@19 -- # local var val 00:03:42.701 10:49:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.701 10:49:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.701 10:49:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.701 10:49:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.701 10:49:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.701 10:49:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 110191052 kB' 'MemAvailable: 113431252 kB' 'Buffers: 12152 kB' 'Cached: 8829096 kB' 'SwapCached: 0 kB' 'Active: 6173280 kB' 'Inactive: 3404296 kB' 'Active(anon): 5629440 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 739736 kB' 'Mapped: 143716 kB' 'Shmem: 4893112 kB' 'KReclaimable: 232792 kB' 'Slab: 769540 kB' 'SReclaimable: 232792 kB' 'SUnreclaim: 536748 kB' 'KernelStack: 26832 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 8459048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231284 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 
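The "Already using the vfio-pci driver" lines above are setup.sh reporting that the test devices listed (including the NVMe controller at 0000:65:00.0) are already bound to vfio-pci, so no rebinding is needed. As a hypothetical manual check, not taken from the log, the bound driver of a PCI function can be read straight from sysfs:

    # Hypothetical check, not part of the test scripts: print the driver currently
    # bound to the NVMe controller seen in the trace.
    basename "$(readlink -f /sys/bus/pci/devices/0000:65:00.0/driver)"
    # -> vfio-pci on this rig, hence the "Already using the vfio-pci driver" messages.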
00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- 
# continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.701 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.701 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 
10:49:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.702 10:49:39 -- setup/common.sh@33 -- # echo 0 00:03:42.702 10:49:39 -- setup/common.sh@33 -- # return 0 00:03:42.702 10:49:39 -- setup/hugepages.sh@97 -- # anon=0 00:03:42.702 10:49:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.702 10:49:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.702 10:49:39 -- setup/common.sh@18 -- # local node= 00:03:42.702 10:49:39 -- setup/common.sh@19 -- # local var val 00:03:42.702 10:49:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.702 10:49:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.702 10:49:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.702 10:49:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.702 10:49:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.702 10:49:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 110189788 kB' 'MemAvailable: 113429988 kB' 'Buffers: 12152 kB' 'Cached: 8829096 kB' 'SwapCached: 0 kB' 'Active: 6173348 kB' 'Inactive: 3404296 kB' 'Active(anon): 5629508 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 740332 kB' 'Mapped: 143712 kB' 'Shmem: 4893112 kB' 'KReclaimable: 232792 kB' 'Slab: 769520 kB' 'SReclaimable: 232792 kB' 'SUnreclaim: 536728 kB' 'KernelStack: 26976 kB' 'PageTables: 7364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 8458692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231332 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.702 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.702 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.703 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.703 10:49:39 -- setup/common.sh@33 -- # echo 0 00:03:42.703 10:49:39 -- setup/common.sh@33 -- # return 0 00:03:42.703 10:49:39 -- setup/hugepages.sh@99 -- # surp=0 00:03:42.703 10:49:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.703 10:49:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.703 10:49:39 -- setup/common.sh@18 -- # local node= 00:03:42.703 10:49:39 -- setup/common.sh@19 -- # local var val 00:03:42.703 10:49:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.703 10:49:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.703 
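The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" above are the field scan inside setup/common.sh's get_meminfo: it mapfiles either /proc/meminfo or a per-node meminfo file, strips the "Node N" prefix, and walks the fields with IFS=': ' until the requested key matches. The following is a condensed sketch of that pattern under those assumptions, not the verbatim SPDK helper.

    # Condensed sketch of the get_meminfo pattern visible in the trace (not verbatim).
    shopt -s extglob                  # needed for the +([0-9]) prefix strip below

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo and drop its "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        echo 0                        # key not present: report 0, as in the trace
    }

    get_meminfo HugePages_Rsvd        # -> 0 on this system, per the trace
    get_meminfo HugePages_Surp 1      # per-node query against node1's meminfo

verify_nr_hugepages repeats this for AnonHugePages, HugePages_Surp and HugePages_Rsvd, then checks that the configured total matches, which is the (( 1536 == nr_hugepages + surp + resv )) test near the end of this excerpt.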
10:49:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.703 10:49:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.703 10:49:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.703 10:49:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.703 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 110190804 kB' 'MemAvailable: 113431020 kB' 'Buffers: 12152 kB' 'Cached: 8829108 kB' 'SwapCached: 0 kB' 'Active: 6173240 kB' 'Inactive: 3404296 kB' 'Active(anon): 5629400 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 740196 kB' 'Mapped: 143712 kB' 'Shmem: 4893124 kB' 'KReclaimable: 232824 kB' 'Slab: 769288 kB' 'SReclaimable: 232824 kB' 'SUnreclaim: 536464 kB' 'KernelStack: 26880 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 8458704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- 
setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.704 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.704 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.705 10:49:39 -- setup/common.sh@33 -- # echo 0 00:03:42.705 10:49:39 -- setup/common.sh@33 -- # return 0 00:03:42.705 10:49:39 -- setup/hugepages.sh@100 -- # resv=0 00:03:42.705 10:49:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:42.705 nr_hugepages=1536 00:03:42.705 10:49:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.705 resv_hugepages=0 00:03:42.705 10:49:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.705 surplus_hugepages=0 00:03:42.705 10:49:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.705 anon_hugepages=0 00:03:42.705 10:49:39 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:42.705 10:49:39 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:42.705 10:49:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.705 10:49:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.705 10:49:39 -- setup/common.sh@18 -- # local node= 00:03:42.705 10:49:39 -- setup/common.sh@19 -- # local var val 00:03:42.705 10:49:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.705 10:49:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.705 10:49:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.705 10:49:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.705 10:49:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.705 10:49:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 110193420 kB' 'MemAvailable: 113433636 kB' 'Buffers: 12152 kB' 'Cached: 8829108 kB' 'SwapCached: 0 kB' 'Active: 
6173352 kB' 'Inactive: 3404296 kB' 'Active(anon): 5629512 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 739784 kB' 'Mapped: 143712 kB' 'Shmem: 4893124 kB' 'KReclaimable: 232824 kB' 'Slab: 769288 kB' 'SReclaimable: 232824 kB' 'SUnreclaim: 536464 kB' 'KernelStack: 26848 kB' 'PageTables: 7292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 8457088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 
10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.705 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.705 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 
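For context on the arithmetic check driving this scan: setup/hugepages.sh asserts that the kernel-reported HugePages_Total equals the pages requested by the test plus surplus and reserved pages. With the values printed in this run the check reduces to plain integer math (a sketch using the numbers from the log rather than re-querying the system):

  nr_hugepages=1536   # requested by the custom_alloc test
  surp=0              # surplus_hugepages echoed above
  resv=0              # resv_hugepages echoed above
  total=1536          # HugePages_Total reported by /proc/meminfo
  (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting is consistent'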
00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.706 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.706 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.706 10:49:39 -- setup/common.sh@33 -- # echo 1536 00:03:42.706 10:49:39 -- setup/common.sh@33 -- # return 0 00:03:42.706 10:49:39 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:42.706 10:49:39 -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.706 10:49:39 -- setup/hugepages.sh@27 -- # local node 00:03:42.706 10:49:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.706 10:49:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.706 10:49:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.706 10:49:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.706 10:49:39 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.706 10:49:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.706 10:49:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.706 10:49:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.706 10:49:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.706 10:49:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.706 10:49:39 -- setup/common.sh@18 -- # local node=0 00:03:42.707 10:49:39 -- setup/common.sh@19 -- # local var val 00:03:42.707 10:49:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.707 10:49:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.707 10:49:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.707 10:49:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.707 10:49:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.707 10:49:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54597624 kB' 'MemUsed: 11061384 kB' 'SwapCached: 0 kB' 'Active: 4903712 kB' 'Inactive: 3243776 kB' 'Active(anon): 4499364 kB' 'Inactive(anon): 0 kB' 'Active(file): 404348 kB' 'Inactive(file): 3243776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7935164 kB' 'Mapped: 120796 kB' 'AnonPages: 215516 kB' 'Shmem: 4287040 kB' 'KernelStack: 14392 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177492 kB' 'Slab: 525556 kB' 'SReclaimable: 177492 kB' 'SUnreclaim: 348064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 
-- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.707 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.707 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@33 -- # echo 0 00:03:42.708 10:49:39 -- setup/common.sh@33 -- # return 0 00:03:42.708 10:49:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.708 10:49:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.708 10:49:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.708 10:49:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.708 10:49:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.708 10:49:39 -- setup/common.sh@18 -- # local node=1 
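The per-node lookups here switch the input from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that must be stripped before the same key matching can run (the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace). A minimal sketch of that file selection and prefix stripping, assuming a NUMA host where node0 exists; this is a simplification, not the script's exact flow:

  shopt -s extglob
  node=0
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] \
      && mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  # drop the "Node 0 " prefix so entries look like plain meminfo lines
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp)'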
00:03:42.708 10:49:39 -- setup/common.sh@19 -- # local var val 00:03:42.708 10:49:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.708 10:49:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.708 10:49:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.708 10:49:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.708 10:49:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.708 10:49:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679876 kB' 'MemFree: 55595348 kB' 'MemUsed: 5084528 kB' 'SwapCached: 0 kB' 'Active: 1269832 kB' 'Inactive: 160520 kB' 'Active(anon): 1130340 kB' 'Inactive(anon): 0 kB' 'Active(file): 139492 kB' 'Inactive(file): 160520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 906128 kB' 'Mapped: 22916 kB' 'AnonPages: 524424 kB' 'Shmem: 606116 kB' 'KernelStack: 12488 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 55332 kB' 'Slab: 243668 kB' 'SReclaimable: 55332 kB' 'SUnreclaim: 188336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- 
# continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.708 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.708 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.709 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.709 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.709 10:49:39 -- setup/common.sh@32 -- # continue 00:03:42.709 10:49:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.709 10:49:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.709 10:49:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.709 10:49:39 -- setup/common.sh@33 -- # echo 0 00:03:42.709 10:49:39 -- setup/common.sh@33 -- # return 0 00:03:42.709 10:49:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.709 10:49:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.709 10:49:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.709 10:49:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.709 10:49:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.709 node0=512 expecting 512 00:03:42.709 10:49:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.709 10:49:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.709 10:49:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.709 10:49:39 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:42.709 node1=1024 expecting 1024 00:03:42.709 10:49:39 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:42.709 00:03:42.709 real 0m3.842s 00:03:42.709 user 0m1.549s 00:03:42.709 sys 0m2.352s 00:03:42.709 10:49:39 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:42.709 10:49:39 -- common/autotest_common.sh@10 -- # set +x 00:03:42.709 ************************************ 00:03:42.709 END TEST custom_alloc 00:03:42.709 ************************************ 00:03:42.709 10:49:39 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:42.709 10:49:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:42.709 10:49:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.709 10:49:39 -- common/autotest_common.sh@10 -- # set +x 00:03:42.709 ************************************ 00:03:42.709 START TEST no_shrink_alloc 00:03:42.709 ************************************ 00:03:42.709 10:49:39 -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:42.709 10:49:39 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:42.709 10:49:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.709 10:49:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.709 10:49:39 -- setup/hugepages.sh@51 -- # shift 00:03:42.709 10:49:39 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.709 10:49:39 -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.709 10:49:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.709 10:49:39 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.709 10:49:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.709 10:49:39 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.709 10:49:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.709 10:49:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.709 10:49:39 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.709 10:49:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.709 10:49:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.709 10:49:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.709 10:49:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.709 10:49:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.709 10:49:39 -- setup/hugepages.sh@73 -- # return 0 00:03:42.709 10:49:39 -- setup/hugepages.sh@198 -- # setup output 00:03:42.709 10:49:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.709 10:49:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.020 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:46.020 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.020 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.597 10:49:42 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:46.597 10:49:42 -- setup/hugepages.sh@89 -- # local node 00:03:46.597 10:49:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.597 10:49:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.597 10:49:42 -- setup/hugepages.sh@92 -- # local surp 00:03:46.597 10:49:42 -- setup/hugepages.sh@93 -- # local resv 00:03:46.597 10:49:42 -- setup/hugepages.sh@94 -- # local anon 00:03:46.597 10:49:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.597 10:49:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.597 10:49:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.597 10:49:42 -- setup/common.sh@18 -- # local node= 00:03:46.597 10:49:42 -- setup/common.sh@19 -- # local var val 00:03:46.597 10:49:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.597 10:49:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.597 10:49:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.597 10:49:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.597 10:49:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.597 10:49:42 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111211672 kB' 'MemAvailable: 114451892 kB' 'Buffers: 12152 kB' 'Cached: 8829248 kB' 'SwapCached: 0 kB' 'Active: 6178200 kB' 'Inactive: 3404296 kB' 'Active(anon): 5634360 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 743956 kB' 'Mapped: 143848 kB' 'Shmem: 4893264 kB' 'KReclaimable: 232832 kB' 'Slab: 768636 kB' 'SReclaimable: 232832 kB' 'SUnreclaim: 535804 kB' 'KernelStack: 26784 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8457192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231268 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
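The nr_hugepages=1024 figure printed by the no_shrink_alloc setup a little earlier follows from dividing the requested size by the default hugepage size, after which all pages are assigned to node 0 because a single node id was passed in. A sketch of that sizing math using the values visible in this log (2097152 kB requested, Hugepagesize 2048 kB); reading Hugepagesize from /proc/meminfo here is illustrative:

  size_kb=2097152
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this system
  echo $(( size_kb / ${hp_kb:-2048} ))                       # -> 1024 hugepages on node 0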
00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.597 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.597 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.597 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.597 10:49:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.598 10:49:43 -- setup/common.sh@33 -- # echo 0 00:03:46.598 10:49:43 -- setup/common.sh@33 -- # return 0 00:03:46.598 10:49:43 -- setup/hugepages.sh@97 -- # anon=0 00:03:46.598 10:49:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.598 10:49:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.598 10:49:43 -- setup/common.sh@18 -- # local node= 00:03:46.598 10:49:43 -- setup/common.sh@19 -- # local var val 00:03:46.598 10:49:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.598 10:49:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.598 10:49:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.598 10:49:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.598 10:49:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.598 10:49:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111211852 kB' 'MemAvailable: 114452072 kB' 'Buffers: 12152 kB' 'Cached: 8829248 kB' 'SwapCached: 0 kB' 'Active: 6177896 kB' 'Inactive: 3404296 kB' 'Active(anon): 5634056 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 744140 kB' 'Mapped: 143744 kB' 'Shmem: 4893264 kB' 'KReclaimable: 232832 kB' 'Slab: 768664 kB' 'SReclaimable: 232832 kB' 'SUnreclaim: 535832 kB' 'KernelStack: 26784 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8457204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231268 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.598 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.598 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- 
setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.599 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.599 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.600 10:49:43 -- setup/common.sh@33 -- # echo 0 00:03:46.600 10:49:43 -- setup/common.sh@33 -- # return 0 00:03:46.600 10:49:43 -- setup/hugepages.sh@99 -- # surp=0 00:03:46.600 10:49:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.600 10:49:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.600 10:49:43 -- setup/common.sh@18 -- # local node= 00:03:46.600 10:49:43 -- setup/common.sh@19 -- # local var val 00:03:46.600 10:49:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.600 10:49:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.600 10:49:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.600 10:49:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.600 10:49:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.600 10:49:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111211804 kB' 'MemAvailable: 114452024 kB' 'Buffers: 12152 kB' 'Cached: 8829260 kB' 'SwapCached: 0 kB' 'Active: 6177916 kB' 'Inactive: 3404296 kB' 'Active(anon): 5634076 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 744144 kB' 'Mapped: 143744 kB' 'Shmem: 4893276 kB' 'KReclaimable: 232832 kB' 'Slab: 768664 kB' 'SReclaimable: 232832 kB' 'SUnreclaim: 535832 kB' 'KernelStack: 26784 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8457220 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231268 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:46.600 10:49:43 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- 
setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.600 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.600 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 
10:49:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.601 10:49:43 -- setup/common.sh@33 -- # echo 0 00:03:46.601 
10:49:43 -- setup/common.sh@33 -- # return 0 00:03:46.601 10:49:43 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.601 10:49:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.601 nr_hugepages=1024 00:03:46.601 10:49:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.601 resv_hugepages=0 00:03:46.601 10:49:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.601 surplus_hugepages=0 00:03:46.601 10:49:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.601 anon_hugepages=0 00:03:46.601 10:49:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.601 10:49:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.601 10:49:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.601 10:49:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.601 10:49:43 -- setup/common.sh@18 -- # local node= 00:03:46.601 10:49:43 -- setup/common.sh@19 -- # local var val 00:03:46.601 10:49:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.601 10:49:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.601 10:49:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.601 10:49:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.601 10:49:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.601 10:49:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111212740 kB' 'MemAvailable: 114452960 kB' 'Buffers: 12152 kB' 'Cached: 8829288 kB' 'SwapCached: 0 kB' 'Active: 6177580 kB' 'Inactive: 3404296 kB' 'Active(anon): 5633740 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 743740 kB' 'Mapped: 143744 kB' 'Shmem: 4893304 kB' 'KReclaimable: 232832 kB' 'Slab: 768664 kB' 'SReclaimable: 232832 kB' 'SUnreclaim: 535832 kB' 'KernelStack: 26768 kB' 'PageTables: 7500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8457232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231268 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.601 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.601 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 
00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 
10:49:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.602 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.602 10:49:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.603 10:49:43 -- setup/common.sh@33 -- # echo 1024 00:03:46.603 10:49:43 -- setup/common.sh@33 -- # return 0 00:03:46.603 10:49:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.603 10:49:43 -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.603 10:49:43 -- setup/hugepages.sh@27 -- # local node 00:03:46.603 10:49:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.603 10:49:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.603 10:49:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.603 10:49:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:46.603 10:49:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.603 10:49:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.603 10:49:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.603 10:49:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.603 10:49:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.603 10:49:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.603 10:49:43 
-- setup/common.sh@18 -- # local node=0 00:03:46.603 10:49:43 -- setup/common.sh@19 -- # local var val 00:03:46.603 10:49:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.603 10:49:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.603 10:49:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.603 10:49:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.603 10:49:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.603 10:49:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53551440 kB' 'MemUsed: 12107568 kB' 'SwapCached: 0 kB' 'Active: 4906884 kB' 'Inactive: 3243776 kB' 'Active(anon): 4502536 kB' 'Inactive(anon): 0 kB' 'Active(file): 404348 kB' 'Inactive(file): 3243776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7935244 kB' 'Mapped: 120828 kB' 'AnonPages: 218620 kB' 'Shmem: 4287120 kB' 'KernelStack: 14280 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177484 kB' 'Slab: 525568 kB' 'SReclaimable: 177484 kB' 'SUnreclaim: 348084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.603 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.603 10:49:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
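The xtrace above is setup/common.sh's get_meminfo helper (invoked from setup/hugepages.sh) walking the node0 meminfo file field by field until it reaches the key it was asked for, HugePages_Surp in this pass. A condensed bash sketch of that scan, with the paths and variable names taken from the trace rather than from the SPDK sources, looks roughly like this:

#!/usr/bin/env bash
# Rough sketch of the get_meminfo scan shown in the xtrace (reader aid only;
# paths and names mirror the trace, not the actual SPDK setup/common.sh).
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2
	local mem_f=/proc/meminfo
	local mem line var val _

	# A per-node lookup reads that node's own meminfo file when it exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Node files prefix every line with "Node <n> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		# Split "Key:   value kB" on ':' and whitespace, as in the trace.
		IFS=': ' read -r var val _ <<<"$line"
		# Skip every field until the requested key is reached.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	echo 0
}

# e.g. get_meminfo HugePages_Surp 0   # surplus huge pages on node0

Each "[[ <field> == ... ]]" / "continue" pair in the trace corresponds to one iteration of this loop; the scan stops when the requested field matches and its value is echoed back to the caller.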
00:03:46.604 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # continue 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.604 10:49:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.604 10:49:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.604 10:49:43 -- setup/common.sh@33 -- # echo 0 00:03:46.604 10:49:43 -- setup/common.sh@33 -- # return 0 00:03:46.604 10:49:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.604 10:49:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.604 10:49:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.604 10:49:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.604 10:49:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.604 node0=1024 expecting 1024 00:03:46.604 10:49:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.604 10:49:43 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:46.604 10:49:43 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:46.604 10:49:43 -- setup/hugepages.sh@202 -- # setup output 00:03:46.604 10:49:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.604 10:49:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.909 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:49.909 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.909 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.172 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:50.172 10:49:46 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:50.172 10:49:46 -- setup/hugepages.sh@89 -- # local node 00:03:50.172 10:49:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.172 10:49:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.172 10:49:46 -- setup/hugepages.sh@92 -- # local surp 00:03:50.172 10:49:46 -- setup/hugepages.sh@93 -- # local resv 00:03:50.172 10:49:46 -- setup/hugepages.sh@94 -- # local anon 00:03:50.172 10:49:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.438 10:49:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.438 10:49:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.438 10:49:46 -- setup/common.sh@18 -- # local node= 00:03:50.438 10:49:46 -- setup/common.sh@19 -- # local var val 00:03:50.438 10:49:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.438 10:49:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.438 10:49:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.438 10:49:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.438 10:49:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.438 10:49:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111212356 kB' 'MemAvailable: 114452576 kB' 'Buffers: 12152 kB' 'Cached: 8829368 kB' 'SwapCached: 0 kB' 'Active: 6181452 kB' 'Inactive: 3404296 kB' 'Active(anon): 5637612 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 747544 kB' 'Mapped: 143760 kB' 'Shmem: 4893384 kB' 'KReclaimable: 232832 kB' 'Slab: 768808 kB' 'SReclaimable: 232832 kB' 'SUnreclaim: 535976 kB' 'KernelStack: 26752 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8457960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231108 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.438 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.438 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.439 10:49:46 -- setup/common.sh@33 -- # echo 0 00:03:50.439 10:49:46 -- setup/common.sh@33 -- # return 0 00:03:50.439 10:49:46 -- setup/hugepages.sh@97 -- # anon=0 00:03:50.439 10:49:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.439 
10:49:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.439 10:49:46 -- setup/common.sh@18 -- # local node= 00:03:50.439 10:49:46 -- setup/common.sh@19 -- # local var val 00:03:50.439 10:49:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.439 10:49:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.439 10:49:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.439 10:49:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.439 10:49:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.439 10:49:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111212084 kB' 'MemAvailable: 114452304 kB' 'Buffers: 12152 kB' 'Cached: 8829368 kB' 'SwapCached: 0 kB' 'Active: 6181812 kB' 'Inactive: 3404296 kB' 'Active(anon): 5637972 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748008 kB' 'Mapped: 143756 kB' 'Shmem: 4893384 kB' 'KReclaimable: 232832 kB' 'Slab: 768960 kB' 'SReclaimable: 232832 kB' 'SUnreclaim: 536128 kB' 'KernelStack: 26784 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8457972 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231060 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.439 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.439 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # 
continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.440 10:49:46 -- setup/common.sh@33 -- # echo 0 00:03:50.440 10:49:46 -- setup/common.sh@33 -- # return 0 00:03:50.440 10:49:46 -- setup/hugepages.sh@99 -- # surp=0 00:03:50.440 10:49:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.440 10:49:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.440 10:49:46 -- setup/common.sh@18 -- # local node= 00:03:50.440 10:49:46 -- setup/common.sh@19 -- # local var val 00:03:50.440 10:49:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.440 10:49:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.440 10:49:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.440 10:49:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.440 10:49:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.440 10:49:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.440 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.440 10:49:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111212084 kB' 'MemAvailable: 114452304 kB' 'Buffers: 12152 kB' 'Cached: 8829368 kB' 'SwapCached: 0 kB' 
'Active: 6181812 kB' 'Inactive: 3404296 kB' 'Active(anon): 5637972 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748008 kB' 'Mapped: 143756 kB' 'Shmem: 4893384 kB' 'KReclaimable: 232832 kB' 'Slab: 768960 kB' 'SReclaimable: 232832 kB' 'SUnreclaim: 536128 kB' 'KernelStack: 26784 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8457988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231060 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:50.440 10:49:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.441 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.441 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 
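For orientation: the three scans in this verify_nr_hugepages block feed a single accounting check. AnonHugePages and HugePages_Surp both came back 0 above, the HugePages_Rsvd scan in progress here also returns 0 just below, and the script then checks that the kernel-reported total covers the expected pool. A minimal sketch of that check, with the values hard-coded from this run (not the SPDK hugepages.sh itself):

#!/usr/bin/env bash
# Minimal sketch of the hugepages accounting performed just below in the trace
# (numbers taken from this run; a reader aid, not the SPDK script).
nr_hugepages=1024   # expected pool size for this test run
anon=0              # AnonHugePages (kB), from the first scan above
surp=0              # HugePages_Surp, from the second scan
resv=0              # HugePages_Rsvd, from the scan in progress here
total=1024          # HugePages_Total, as reported by /proc/meminfo

# The pool passes verification when the reported total accounts for the
# requested pages plus any surplus and reserved pages:
if (( total == nr_hugepages + surp + resv )); then
	echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
fi
# With this run's numbers: 1024 == 1024 + 0 + 0, so the already-allocated pool
# is accepted as-is even though only 512 pages were requested earlier.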
00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.442 10:49:46 -- setup/common.sh@33 -- # echo 0 00:03:50.442 10:49:46 -- setup/common.sh@33 -- # return 0 00:03:50.442 10:49:46 -- setup/hugepages.sh@100 -- # resv=0 00:03:50.442 10:49:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.442 nr_hugepages=1024 00:03:50.442 10:49:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.442 resv_hugepages=0 00:03:50.442 10:49:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.442 surplus_hugepages=0 00:03:50.442 10:49:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.442 anon_hugepages=0 00:03:50.442 10:49:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.442 10:49:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.442 10:49:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.442 10:49:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.442 10:49:46 -- setup/common.sh@18 -- # local node= 00:03:50.442 10:49:46 -- setup/common.sh@19 -- # local var val 00:03:50.442 10:49:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.442 10:49:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.442 10:49:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.442 10:49:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.442 10:49:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.442 10:49:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 111212684 kB' 'MemAvailable: 114452904 kB' 'Buffers: 12152 kB' 'Cached: 8829408 kB' 'SwapCached: 0 kB' 'Active: 6181492 kB' 'Inactive: 3404296 kB' 'Active(anon): 5637652 kB' 'Inactive(anon): 0 kB' 'Active(file): 543840 kB' 'Inactive(file): 3404296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 747612 kB' 'Mapped: 143756 kB' 'Shmem: 4893424 kB' 'KReclaimable: 232832 kB' 'Slab: 768960 kB' 'SReclaimable: 232832 kB' 'SUnreclaim: 536128 kB' 'KernelStack: 26768 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8458000 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 231060 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 499060 kB' 'DirectMap2M: 11763712 kB' 'DirectMap1G: 123731968 kB' 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.442 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.442 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- 
setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.443 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.443 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.443 10:49:46 -- 
setup/common.sh@33 -- # echo 1024 00:03:50.443 10:49:46 -- setup/common.sh@33 -- # return 0 00:03:50.443 10:49:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.444 10:49:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.444 10:49:46 -- setup/hugepages.sh@27 -- # local node 00:03:50.444 10:49:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.444 10:49:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.444 10:49:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.444 10:49:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:50.444 10:49:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.444 10:49:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.444 10:49:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.444 10:49:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.444 10:49:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.444 10:49:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.444 10:49:46 -- setup/common.sh@18 -- # local node=0 00:03:50.444 10:49:46 -- setup/common.sh@19 -- # local var val 00:03:50.444 10:49:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.444 10:49:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.444 10:49:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.444 10:49:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.444 10:49:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.444 10:49:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53548100 kB' 'MemUsed: 12110908 kB' 'SwapCached: 0 kB' 'Active: 4905528 kB' 'Inactive: 3243776 kB' 'Active(anon): 4501180 kB' 'Inactive(anon): 0 kB' 'Active(file): 404348 kB' 'Inactive(file): 3243776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7935260 kB' 'Mapped: 120840 kB' 'AnonPages: 217224 kB' 'Shmem: 4287136 kB' 'KernelStack: 14280 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177484 kB' 'Slab: 525760 kB' 'SReclaimable: 177484 kB' 'SUnreclaim: 348276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 
10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.444 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.444 10:49:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.445 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.445 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.445 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.445 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.445 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.445 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.445 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.445 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.445 10:49:46 -- setup/common.sh@32 -- # continue 00:03:50.445 10:49:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.445 10:49:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.445 10:49:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.445 10:49:46 -- setup/common.sh@33 -- # echo 0 00:03:50.445 10:49:46 -- setup/common.sh@33 -- # return 0 00:03:50.445 10:49:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.445 10:49:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.445 10:49:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.445 10:49:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.445 10:49:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:50.445 node0=1024 expecting 1024 00:03:50.445 10:49:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:50.445 00:03:50.445 real 0m7.652s 00:03:50.445 user 0m3.129s 00:03:50.445 sys 0m4.634s 00:03:50.445 10:49:46 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.445 10:49:46 -- common/autotest_common.sh@10 -- # set +x 00:03:50.445 ************************************ 00:03:50.445 END TEST no_shrink_alloc 00:03:50.445 ************************************ 00:03:50.445 10:49:47 -- setup/hugepages.sh@217 -- # clear_hp 00:03:50.445 10:49:47 -- setup/hugepages.sh@37 -- # local node hp 00:03:50.445 10:49:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:50.445 
10:49:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:50.445 10:49:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:50.445 10:49:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:50.445 10:49:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:50.445 10:49:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:50.445 10:49:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:50.445 10:49:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:50.445 10:49:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:50.445 10:49:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:50.445 10:49:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:50.445 10:49:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:50.445 00:03:50.445 real 0m27.912s 00:03:50.445 user 0m11.314s 00:03:50.445 sys 0m16.978s 00:03:50.445 10:49:47 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.445 10:49:47 -- common/autotest_common.sh@10 -- # set +x 00:03:50.445 ************************************ 00:03:50.445 END TEST hugepages 00:03:50.445 ************************************ 00:03:50.445 10:49:47 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:50.445 10:49:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:50.445 10:49:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:50.445 10:49:47 -- common/autotest_common.sh@10 -- # set +x 00:03:50.706 ************************************ 00:03:50.706 START TEST driver 00:03:50.706 ************************************ 00:03:50.706 10:49:47 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:50.706 * Looking for test storage... 
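The hugepages traces above all funnel through setup/common.sh's get_meminfo helper, which scans /proc/meminfo (or a per-node meminfo under sysfs) key by key until it reaches the requested field and echoes its value. A minimal standalone sketch of that parsing pattern, assuming only bash and a readable meminfo file; the function name and unit handling are illustrative, not the SPDK helper verbatim:

# read_meminfo KEY [NODE] - echo the numeric value for KEY, mirroring the
# IFS/read loop visible in the xtrace output above (illustrative sketch)
read_meminfo() {
    local key=$1 node=${2:-} file=/proc/meminfo line
    [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node "$node" }      # per-node files prefix each row with "Node <id> "
        if [[ $line == "$key:"* ]]; then
            line=${line#"$key:"}
            echo "${line//[!0-9]/}"     # keep digits only (drops padding and the kB unit)
            return 0
        fi
    done < "$file"
    return 1
}

# e.g. read_meminfo HugePages_Rsvd    -> 0   (the "echo 0 / return 0" seen above)
#      read_meminfo HugePages_Surp 0  -> 0   (read from node0/meminfo)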
00:03:50.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:50.706 10:49:47 -- setup/driver.sh@68 -- # setup reset 00:03:50.706 10:49:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.706 10:49:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.002 10:49:52 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:56.002 10:49:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:56.002 10:49:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:56.002 10:49:52 -- common/autotest_common.sh@10 -- # set +x 00:03:56.002 ************************************ 00:03:56.002 START TEST guess_driver 00:03:56.002 ************************************ 00:03:56.002 10:49:52 -- common/autotest_common.sh@1121 -- # guess_driver 00:03:56.002 10:49:52 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:56.002 10:49:52 -- setup/driver.sh@47 -- # local fail=0 00:03:56.002 10:49:52 -- setup/driver.sh@49 -- # pick_driver 00:03:56.002 10:49:52 -- setup/driver.sh@36 -- # vfio 00:03:56.002 10:49:52 -- setup/driver.sh@21 -- # local iommu_grups 00:03:56.002 10:49:52 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:56.002 10:49:52 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:56.002 10:49:52 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:56.002 10:49:52 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:56.002 10:49:52 -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:03:56.002 10:49:52 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:56.002 10:49:52 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:56.002 10:49:52 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:56.002 10:49:52 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:56.002 10:49:52 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:56.002 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:56.002 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:56.002 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:56.002 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:56.002 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:56.002 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:56.002 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:56.002 10:49:52 -- setup/driver.sh@30 -- # return 0 00:03:56.002 10:49:52 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:56.002 10:49:52 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:56.002 10:49:52 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:56.002 10:49:52 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:56.002 Looking for driver=vfio-pci 00:03:56.002 10:49:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.002 10:49:52 -- setup/driver.sh@45 -- # setup output config 00:03:56.002 10:49:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.002 10:49:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.307 10:49:55 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:59.307 10:49:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.307 10:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.878 10:49:56 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:59.878 10:49:56 -- setup/driver.sh@65 -- # setup reset 00:03:59.878 10:49:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.878 10:49:56 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.175 00:04:05.175 real 0m8.952s 00:04:05.175 user 0m2.868s 00:04:05.175 sys 0m5.220s 00:04:05.175 10:50:01 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.175 10:50:01 -- common/autotest_common.sh@10 -- # set +x 00:04:05.175 ************************************ 00:04:05.175 END TEST guess_driver 00:04:05.175 ************************************ 00:04:05.175 00:04:05.175 real 0m14.202s 00:04:05.175 user 0m4.450s 00:04:05.175 sys 0m7.989s 00:04:05.175 10:50:01 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.175 10:50:01 -- common/autotest_common.sh@10 -- # set +x 00:04:05.175 ************************************ 00:04:05.175 END TEST driver 00:04:05.175 ************************************ 00:04:05.175 10:50:01 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:05.175 10:50:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:05.175 10:50:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.175 10:50:01 -- common/autotest_common.sh@10 -- # set +x 00:04:05.175 ************************************ 00:04:05.175 START TEST devices 00:04:05.175 ************************************ 00:04:05.175 10:50:01 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:05.175 * Looking for test storage... 
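The guess_driver run that just completed settles on vfio-pci only after confirming that IOMMU groups are populated and that the module's dependency chain resolves (the modprobe --show-depends listing above). A rough standalone sketch of that decision, assuming modprobe and sysfs are available; the helper name and the uio_pci_generic fallback are illustrative rather than the SPDK script verbatim:

# pick_userspace_driver - prefer vfio-pci when the IOMMU is usable, mirroring
# the checks traced above (illustrative sketch, not setup/driver.sh itself)
pick_userspace_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    shopt -u nullglob
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci                       # 314 groups on this node, so this branch wins
    elif modprobe --show-depends uio_pci_generic >/dev/null 2>&1; then
        echo uio_pci_generic                # no-IOMMU fallback (assumption, not in this log)
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}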
00:04:05.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:05.175 10:50:01 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:05.175 10:50:01 -- setup/devices.sh@192 -- # setup reset 00:04:05.175 10:50:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.175 10:50:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.389 10:50:05 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:09.389 10:50:05 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:09.389 10:50:05 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:09.389 10:50:05 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:09.389 10:50:05 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:09.389 10:50:05 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:09.389 10:50:05 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:09.389 10:50:05 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.389 10:50:05 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:09.389 10:50:05 -- setup/devices.sh@196 -- # blocks=() 00:04:09.389 10:50:05 -- setup/devices.sh@196 -- # declare -a blocks 00:04:09.389 10:50:05 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:09.389 10:50:05 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:09.389 10:50:05 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:09.389 10:50:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.389 10:50:05 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:09.389 10:50:05 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.389 10:50:05 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:09.389 10:50:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:09.389 10:50:05 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:09.389 10:50:05 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:09.389 10:50:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:09.389 No valid GPT data, bailing 00:04:09.389 10:50:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:09.389 10:50:05 -- scripts/common.sh@391 -- # pt= 00:04:09.389 10:50:05 -- scripts/common.sh@392 -- # return 1 00:04:09.389 10:50:05 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:09.389 10:50:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:09.389 10:50:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:09.389 10:50:05 -- setup/common.sh@80 -- # echo 1920383410176 00:04:09.389 10:50:05 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:09.389 10:50:05 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.389 10:50:05 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:09.389 10:50:05 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:09.389 10:50:05 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:09.389 10:50:05 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:09.389 10:50:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:09.389 10:50:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.389 10:50:05 -- common/autotest_common.sh@10 -- # set +x 00:04:09.389 ************************************ 00:04:09.389 START TEST nvme_mount 00:04:09.389 ************************************ 00:04:09.389 10:50:05 -- 
common/autotest_common.sh@1121 -- # nvme_mount 00:04:09.389 10:50:05 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:09.389 10:50:05 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:09.389 10:50:05 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.389 10:50:05 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.389 10:50:05 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:09.389 10:50:05 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.389 10:50:05 -- setup/common.sh@40 -- # local part_no=1 00:04:09.389 10:50:05 -- setup/common.sh@41 -- # local size=1073741824 00:04:09.389 10:50:05 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.389 10:50:05 -- setup/common.sh@44 -- # parts=() 00:04:09.389 10:50:05 -- setup/common.sh@44 -- # local parts 00:04:09.389 10:50:05 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.389 10:50:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.389 10:50:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.389 10:50:05 -- setup/common.sh@46 -- # (( part++ )) 00:04:09.389 10:50:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.389 10:50:05 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:09.389 10:50:05 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.389 10:50:05 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:10.334 Creating new GPT entries in memory. 00:04:10.334 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.334 other utilities. 00:04:10.334 10:50:06 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.334 10:50:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.334 10:50:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.334 10:50:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.334 10:50:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:11.280 Creating new GPT entries in memory. 00:04:11.280 The operation has completed successfully. 
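With the partition table written, the nvme_mount test formats the new 1 GiB partition (sectors 2048-2099199), mounts it under the test directory, and drops a dummy file that the later verify step looks for; the cleanup_nvme step further down unwinds it with umount and wipefs. A condensed, destructive sketch of that sequence using the device, size, and flags shown in the trace; partprobe stands in here for the script's sync_dev_uevents.sh helper, so treat it as an illustrative substitution:

# DESTRUCTIVE to $disk - condensed replay of the partition/mkfs/mount steps traced here
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                            # clear any existing GPT/MBR
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # one 1 GiB partition, as in the log
partprobe "$disk"                                   # re-read the partition table (substitution)
mkfs.ext4 -qF "${disk}p1"                           # same quiet/force flags as the trace
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                              # dummy file checked by the verify step

# cleanup, as at the end of this excerpt: unmount, then wipe the partition and the disk
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"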
00:04:11.280 10:50:07 -- setup/common.sh@57 -- # (( part++ )) 00:04:11.280 10:50:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.280 10:50:07 -- setup/common.sh@62 -- # wait 111469 00:04:11.280 10:50:07 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.280 10:50:07 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:11.280 10:50:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.280 10:50:07 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:11.280 10:50:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:11.280 10:50:07 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.280 10:50:07 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.280 10:50:07 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:11.280 10:50:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:11.280 10:50:07 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.280 10:50:07 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.280 10:50:07 -- setup/devices.sh@53 -- # local found=0 00:04:11.280 10:50:07 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.280 10:50:07 -- setup/devices.sh@56 -- # : 00:04:11.280 10:50:07 -- setup/devices.sh@59 -- # local pci status 00:04:11.280 10:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.280 10:50:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:11.280 10:50:07 -- setup/devices.sh@47 -- # setup output config 00:04:11.280 10:50:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.280 10:50:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.590 10:50:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:10 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:14.590 10:50:11 -- setup/devices.sh@63 -- # found=1 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.590 10:50:11 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.590 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.851 10:50:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.851 10:50:11 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:14.851 10:50:11 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.851 10:50:11 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.851 10:50:11 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.851 10:50:11 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:14.851 10:50:11 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.851 10:50:11 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.851 10:50:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:14.851 10:50:11 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.113 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.113 10:50:11 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.113 10:50:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.375 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:15.375 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:15.375 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.375 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.375 10:50:11 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:15.375 10:50:11 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:15.375 10:50:11 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.375 10:50:11 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:15.375 10:50:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:15.375 10:50:11 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.375 10:50:11 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.375 10:50:11 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:15.375 10:50:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:15.375 10:50:11 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.375 10:50:11 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.375 10:50:11 -- setup/devices.sh@53 -- # local found=0 00:04:15.375 10:50:11 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.375 10:50:11 -- setup/devices.sh@56 -- # : 00:04:15.375 10:50:11 -- setup/devices.sh@59 -- # local pci status 00:04:15.375 10:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.375 10:50:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:15.375 10:50:11 -- setup/devices.sh@47 -- # setup output config 00:04:15.375 10:50:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.375 10:50:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:18.683 10:50:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:14 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:18.683 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:18.683 10:50:15 -- setup/devices.sh@63 -- # found=1 00:04:18.683 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.683 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.683 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.684 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.684 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.684 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.684 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.684 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.684 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.684 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.684 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.684 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.684 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.684 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.684 10:50:15 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.684 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.945 10:50:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.945 10:50:15 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:18.945 10:50:15 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.945 10:50:15 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.945 10:50:15 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.945 10:50:15 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.945 10:50:15 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:18.945 10:50:15 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:18.945 10:50:15 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:18.945 10:50:15 -- setup/devices.sh@50 -- # local mount_point= 00:04:18.945 10:50:15 -- setup/devices.sh@51 -- # local test_file= 00:04:18.945 10:50:15 -- setup/devices.sh@53 -- # local found=0 00:04:18.945 10:50:15 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:18.945 10:50:15 -- setup/devices.sh@59 -- # local pci status 00:04:18.945 10:50:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.945 10:50:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:18.945 10:50:15 -- setup/devices.sh@47 -- # setup output config 00:04:18.945 10:50:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.945 10:50:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.252 10:50:18 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:22.252 10:50:18 -- setup/devices.sh@63 -- # found=1 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.252 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.252 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.253 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.253 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.253 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.253 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.253 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.253 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.253 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.253 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.253 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.253 10:50:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.253 10:50:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.514 10:50:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.514 10:50:19 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:22.514 10:50:19 -- setup/devices.sh@68 -- # return 0 00:04:22.514 10:50:19 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:22.514 10:50:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.514 10:50:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:22.514 10:50:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.514 10:50:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.775 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.775 00:04:22.775 real 0m13.540s 00:04:22.775 user 0m4.231s 00:04:22.775 sys 0m7.186s 00:04:22.775 10:50:19 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.775 10:50:19 -- common/autotest_common.sh@10 -- # set +x 00:04:22.775 ************************************ 00:04:22.775 END TEST nvme_mount 00:04:22.775 ************************************ 00:04:22.775 10:50:19 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:22.775 10:50:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.775 10:50:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.775 10:50:19 -- common/autotest_common.sh@10 -- # set +x 00:04:22.775 ************************************ 00:04:22.775 START TEST dm_mount 00:04:22.775 ************************************ 00:04:22.775 10:50:19 -- common/autotest_common.sh@1121 -- # dm_mount 00:04:22.775 10:50:19 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:22.775 10:50:19 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:22.775 10:50:19 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:22.775 10:50:19 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:22.775 10:50:19 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:22.775 10:50:19 -- setup/common.sh@40 -- # local part_no=2 00:04:22.775 10:50:19 -- setup/common.sh@41 -- # local size=1073741824 00:04:22.776 10:50:19 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:22.776 10:50:19 -- setup/common.sh@44 -- # parts=() 00:04:22.776 10:50:19 -- setup/common.sh@44 -- # local parts 00:04:22.776 10:50:19 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:22.776 10:50:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.776 10:50:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.776 10:50:19 -- setup/common.sh@46 -- # (( part++ )) 00:04:22.776 10:50:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.776 10:50:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.776 10:50:19 -- setup/common.sh@46 -- # (( part++ )) 00:04:22.776 10:50:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.776 10:50:19 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:22.776 10:50:19 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:22.776 10:50:19 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:23.721 Creating new GPT entries in memory. 00:04:23.721 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:23.721 other utilities. 00:04:23.721 10:50:20 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:23.721 10:50:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.721 10:50:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:23.721 10:50:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.721 10:50:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:24.664 Creating new GPT entries in memory. 00:04:24.664 The operation has completed successfully. 
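(Editor's note) The dm_mount setup running here wipes the GPT with sgdisk --zap-all and then creates two 1 GiB partitions, with the second --new call following just below; each sgdisk call is wrapped in flock on the whole disk so nothing re-reads the table mid-update. A minimal standalone sketch of the same sequence is shown here; the disk name and partition size are illustrative assumptions, not values to reuse against this test rig:

    #!/usr/bin/env bash
    # Hypothetical sketch: recreate the two-partition layout used by the dm_mount test.
    disk=/dev/nvme0n1                          # assumed scratch disk, as in this job
    sectors=$(( 1073741824 / 512 ))            # 1 GiB in 512-byte sectors = 2097152
    sgdisk "$disk" --zap-all                   # destroy any existing GPT/MBR structures
    start=2048
    for part in 1 2; do
      end=$(( start + sectors - 1 ))
      flock "$disk" sgdisk "$disk" --new=${part}:${start}:${end}   # serialize against other users of the disk
      start=$(( end + 1 ))
    done
    partprobe "$disk"                          # have the kernel re-read the new table

With these numbers the first call produces --new=1:2048:2099199 and the second --new=2:2099200:4196351, matching the ranges seen in the trace.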
00:04:24.664 10:50:21 -- setup/common.sh@57 -- # (( part++ )) 00:04:24.664 10:50:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.664 10:50:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.664 10:50:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.664 10:50:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:26.049 The operation has completed successfully. 00:04:26.049 10:50:22 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.049 10:50:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.049 10:50:22 -- setup/common.sh@62 -- # wait 116662 00:04:26.049 10:50:22 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:26.049 10:50:22 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.049 10:50:22 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.049 10:50:22 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:26.049 10:50:22 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:26.049 10:50:22 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.049 10:50:22 -- setup/devices.sh@161 -- # break 00:04:26.049 10:50:22 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.049 10:50:22 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:26.049 10:50:22 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:26.049 10:50:22 -- setup/devices.sh@166 -- # dm=dm-0 00:04:26.049 10:50:22 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:26.049 10:50:22 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:26.049 10:50:22 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.049 10:50:22 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:26.049 10:50:22 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.049 10:50:22 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.049 10:50:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:26.049 10:50:22 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.049 10:50:22 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.049 10:50:22 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:26.049 10:50:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:26.049 10:50:22 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.049 10:50:22 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.049 10:50:22 -- setup/devices.sh@53 -- # local found=0 00:04:26.049 10:50:22 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:26.049 10:50:22 -- setup/devices.sh@56 -- # : 00:04:26.049 10:50:22 -- 
setup/devices.sh@59 -- # local pci status 00:04:26.049 10:50:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.049 10:50:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:26.049 10:50:22 -- setup/devices.sh@47 -- # setup output config 00:04:26.049 10:50:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.049 10:50:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:29.352 10:50:25 -- setup/devices.sh@63 -- # found=1 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.352 10:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.352 10:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.613 10:50:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.613 10:50:26 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:29.613 10:50:26 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.613 10:50:26 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:29.613 10:50:26 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:29.613 10:50:26 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.613 10:50:26 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:29.613 10:50:26 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.613 10:50:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:29.613 10:50:26 -- setup/devices.sh@50 -- # local mount_point= 00:04:29.613 10:50:26 -- setup/devices.sh@51 -- # local test_file= 00:04:29.613 10:50:26 -- setup/devices.sh@53 -- # local found=0 00:04:29.613 10:50:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.613 10:50:26 -- setup/devices.sh@59 -- # local pci status 00:04:29.613 10:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.613 10:50:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.613 10:50:26 -- setup/devices.sh@47 -- # setup output config 00:04:29.613 10:50:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.613 10:50:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:32.917 10:50:29 -- setup/devices.sh@63 -- # found=1 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.917 10:50:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.917 10:50:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.178 10:50:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.178 10:50:29 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.178 10:50:29 -- setup/devices.sh@68 -- # return 0 00:04:33.178 10:50:29 -- setup/devices.sh@187 -- # cleanup_dm 00:04:33.178 10:50:29 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.178 10:50:29 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.178 10:50:29 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:33.178 10:50:29 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.178 10:50:29 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:33.178 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.178 10:50:29 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.178 10:50:29 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:33.178 00:04:33.179 real 0m10.576s 00:04:33.179 user 0m2.833s 00:04:33.179 sys 0m4.781s 00:04:33.440 10:50:29 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.440 10:50:29 -- common/autotest_common.sh@10 -- # set +x 00:04:33.440 ************************************ 00:04:33.440 END TEST dm_mount 00:04:33.440 ************************************ 00:04:33.440 10:50:29 -- setup/devices.sh@1 -- # cleanup 00:04:33.440 10:50:29 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:33.440 10:50:29 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.440 10:50:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.440 10:50:29 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.440 10:50:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.440 10:50:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.701 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
00:04:33.701 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:33.701 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.701 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:33.701 10:50:30 -- setup/devices.sh@12 -- # cleanup_dm 00:04:33.701 10:50:30 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.701 10:50:30 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.701 10:50:30 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.701 10:50:30 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.701 10:50:30 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.701 10:50:30 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:33.701 00:04:33.701 real 0m28.750s 00:04:33.701 user 0m8.734s 00:04:33.701 sys 0m14.810s 00:04:33.701 10:50:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.701 10:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:33.701 ************************************ 00:04:33.701 END TEST devices 00:04:33.701 ************************************ 00:04:33.701 00:04:33.701 real 1m37.731s 00:04:33.701 user 0m33.558s 00:04:33.701 sys 0m55.230s 00:04:33.701 10:50:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.701 10:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:33.701 ************************************ 00:04:33.701 END TEST setup.sh 00:04:33.701 ************************************ 00:04:33.701 10:50:30 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:37.007 Hugepages 00:04:37.007 node hugesize free / total 00:04:37.007 node0 1048576kB 0 / 0 00:04:37.007 node0 2048kB 2048 / 2048 00:04:37.007 node1 1048576kB 0 / 0 00:04:37.007 node1 2048kB 0 / 0 00:04:37.007 00:04:37.007 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:37.007 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:37.007 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:37.007 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:37.007 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:37.007 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:37.007 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:37.007 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:37.007 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:37.268 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:37.268 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:37.268 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:37.268 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:37.268 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:37.268 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:37.268 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:37.268 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:37.268 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:37.268 10:50:33 -- spdk/autotest.sh@130 -- # uname -s 00:04:37.268 10:50:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:37.268 10:50:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:37.268 10:50:33 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.576 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:40.576 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:40.576 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:40.576 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:40.576 0000:80:01.2 (8086 0b00): 
ioatdma -> vfio-pci 00:04:40.576 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:40.838 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.758 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:43.019 10:50:39 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:43.966 10:50:40 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:43.966 10:50:40 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:43.966 10:50:40 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:43.966 10:50:40 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:43.966 10:50:40 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:43.966 10:50:40 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:43.966 10:50:40 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.966 10:50:40 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.966 10:50:40 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:43.967 10:50:40 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:43.967 10:50:40 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:04:43.967 10:50:40 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.277 Waiting for block devices as requested 00:04:47.277 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:47.277 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:47.538 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:47.538 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:47.538 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:47.799 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:47.799 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:47.799 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:48.061 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:48.061 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:48.323 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:48.323 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:48.323 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:48.584 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:48.584 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:48.584 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:48.845 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:49.107 10:50:45 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:49.107 10:50:45 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:49.107 10:50:45 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:49.107 10:50:45 -- common/autotest_common.sh@1498 -- # grep 0000:65:00.0/nvme/nvme 00:04:49.107 10:50:45 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:49.107 10:50:45 -- common/autotest_common.sh@1499 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:49.107 10:50:45 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:49.107 10:50:45 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:49.107 10:50:45 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:49.107 10:50:45 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:49.107 10:50:45 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:49.107 10:50:45 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:49.107 10:50:45 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:49.107 10:50:45 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:04:49.107 10:50:45 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:49.107 10:50:45 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:49.107 10:50:45 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:49.107 10:50:45 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:49.107 10:50:45 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:49.107 10:50:45 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:49.107 10:50:45 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:49.107 10:50:45 -- common/autotest_common.sh@1553 -- # continue 00:04:49.107 10:50:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:49.107 10:50:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.107 10:50:45 -- common/autotest_common.sh@10 -- # set +x 00:04:49.107 10:50:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:49.107 10:50:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:49.107 10:50:45 -- common/autotest_common.sh@10 -- # set +x 00:04:49.107 10:50:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.415 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.415 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.415 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.415 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.415 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.415 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.677 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:52.939 10:50:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:52.939 10:50:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.939 10:50:49 -- common/autotest_common.sh@10 -- # set +x 00:04:53.201 10:50:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:53.201 10:50:49 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:53.201 10:50:49 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:53.201 10:50:49 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:53.201 10:50:49 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:53.201 10:50:49 -- common/autotest_common.sh@1575 -- # 
get_nvme_bdfs 00:04:53.201 10:50:49 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:53.201 10:50:49 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:53.201 10:50:49 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:53.201 10:50:49 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:53.201 10:50:49 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:53.201 10:50:49 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:53.201 10:50:49 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:04:53.201 10:50:49 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:53.201 10:50:49 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:53.201 10:50:49 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:04:53.201 10:50:49 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:53.201 10:50:49 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:04:53.201 10:50:49 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:04:53.201 10:50:49 -- common/autotest_common.sh@1589 -- # return 0 00:04:53.201 10:50:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:53.201 10:50:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:53.201 10:50:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:53.201 10:50:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:53.201 10:50:49 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:53.201 10:50:49 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:53.201 10:50:49 -- common/autotest_common.sh@10 -- # set +x 00:04:53.201 10:50:49 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.201 10:50:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.201 10:50:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.201 10:50:49 -- common/autotest_common.sh@10 -- # set +x 00:04:53.201 ************************************ 00:04:53.201 START TEST env 00:04:53.201 ************************************ 00:04:53.201 10:50:49 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.463 * Looking for test storage... 
00:04:53.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:53.463 10:50:49 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.463 10:50:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.463 10:50:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.463 10:50:49 -- common/autotest_common.sh@10 -- # set +x 00:04:53.463 ************************************ 00:04:53.463 START TEST env_memory 00:04:53.463 ************************************ 00:04:53.463 10:50:49 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.463 00:04:53.463 00:04:53.463 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.463 http://cunit.sourceforge.net/ 00:04:53.463 00:04:53.463 00:04:53.463 Suite: memory 00:04:53.463 Test: alloc and free memory map ...[2024-05-15 10:50:49.971483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:53.463 passed 00:04:53.463 Test: mem map translation ...[2024-05-15 10:50:49.997131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:53.463 [2024-05-15 10:50:49.997167] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:53.463 [2024-05-15 10:50:49.997213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:53.463 [2024-05-15 10:50:49.997219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:53.463 passed 00:04:53.463 Test: mem map registration ...[2024-05-15 10:50:50.052950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:53.463 [2024-05-15 10:50:50.052980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:53.463 passed 00:04:53.727 Test: mem map adjacent registrations ...passed 00:04:53.727 00:04:53.727 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.727 suites 1 1 n/a 0 0 00:04:53.727 tests 4 4 4 0 0 00:04:53.727 asserts 152 152 152 0 n/a 00:04:53.727 00:04:53.727 Elapsed time = 0.195 seconds 00:04:53.727 00:04:53.727 real 0m0.209s 00:04:53.727 user 0m0.199s 00:04:53.727 sys 0m0.009s 00:04:53.727 10:50:50 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.727 10:50:50 -- common/autotest_common.sh@10 -- # set +x 00:04:53.727 ************************************ 00:04:53.727 END TEST env_memory 00:04:53.727 ************************************ 00:04:53.727 10:50:50 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.727 10:50:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.727 10:50:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.727 10:50:50 -- common/autotest_common.sh@10 -- # set +x 
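(Editor's note) The *ERROR* lines in the env_memory run above are expected negative-path output: memory_ut deliberately feeds spdk_mem_map_set_translation and spdk_mem_register unaligned or out-of-range vaddr/len pairs and asserts that they are rejected, so failing-looking messages alongside a passing suite are correct. The env_vtophys run that starts next brings up a real EAL and needs 2 MB hugepages to exercise virtual-to-physical translation. A rough way to reproduce both steps by hand is sketched below; the hugepage count is an assumption, while the binary paths are the ones this job uses:

    # Hypothetical manual reproduction of the env_memory / env_vtophys steps.
    spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # reserve 2 MB hugepages (count assumed)
    sudo "$spdk_dir/test/env/memory/memory_ut"      # mem_map alloc/translation/registration checks; *ERROR* lines expected
    sudo "$spdk_dir/test/env/vtophys/vtophys"       # malloc + hugepage translation checks under EAL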
00:04:53.727 ************************************ 00:04:53.727 START TEST env_vtophys 00:04:53.727 ************************************ 00:04:53.727 10:50:50 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.727 EAL: lib.eal log level changed from notice to debug 00:04:53.727 EAL: Detected lcore 0 as core 0 on socket 0 00:04:53.727 EAL: Detected lcore 1 as core 1 on socket 0 00:04:53.727 EAL: Detected lcore 2 as core 2 on socket 0 00:04:53.727 EAL: Detected lcore 3 as core 3 on socket 0 00:04:53.727 EAL: Detected lcore 4 as core 4 on socket 0 00:04:53.727 EAL: Detected lcore 5 as core 5 on socket 0 00:04:53.727 EAL: Detected lcore 6 as core 6 on socket 0 00:04:53.727 EAL: Detected lcore 7 as core 7 on socket 0 00:04:53.727 EAL: Detected lcore 8 as core 8 on socket 0 00:04:53.727 EAL: Detected lcore 9 as core 9 on socket 0 00:04:53.727 EAL: Detected lcore 10 as core 10 on socket 0 00:04:53.727 EAL: Detected lcore 11 as core 11 on socket 0 00:04:53.727 EAL: Detected lcore 12 as core 12 on socket 0 00:04:53.727 EAL: Detected lcore 13 as core 13 on socket 0 00:04:53.727 EAL: Detected lcore 14 as core 14 on socket 0 00:04:53.727 EAL: Detected lcore 15 as core 15 on socket 0 00:04:53.727 EAL: Detected lcore 16 as core 16 on socket 0 00:04:53.727 EAL: Detected lcore 17 as core 17 on socket 0 00:04:53.727 EAL: Detected lcore 18 as core 18 on socket 0 00:04:53.727 EAL: Detected lcore 19 as core 19 on socket 0 00:04:53.727 EAL: Detected lcore 20 as core 20 on socket 0 00:04:53.727 EAL: Detected lcore 21 as core 21 on socket 0 00:04:53.727 EAL: Detected lcore 22 as core 22 on socket 0 00:04:53.727 EAL: Detected lcore 23 as core 23 on socket 0 00:04:53.727 EAL: Detected lcore 24 as core 24 on socket 0 00:04:53.727 EAL: Detected lcore 25 as core 25 on socket 0 00:04:53.727 EAL: Detected lcore 26 as core 26 on socket 0 00:04:53.727 EAL: Detected lcore 27 as core 27 on socket 0 00:04:53.727 EAL: Detected lcore 28 as core 28 on socket 0 00:04:53.727 EAL: Detected lcore 29 as core 29 on socket 0 00:04:53.727 EAL: Detected lcore 30 as core 30 on socket 0 00:04:53.727 EAL: Detected lcore 31 as core 31 on socket 0 00:04:53.727 EAL: Detected lcore 32 as core 32 on socket 0 00:04:53.727 EAL: Detected lcore 33 as core 33 on socket 0 00:04:53.727 EAL: Detected lcore 34 as core 34 on socket 0 00:04:53.727 EAL: Detected lcore 35 as core 35 on socket 0 00:04:53.727 EAL: Detected lcore 36 as core 0 on socket 1 00:04:53.727 EAL: Detected lcore 37 as core 1 on socket 1 00:04:53.727 EAL: Detected lcore 38 as core 2 on socket 1 00:04:53.727 EAL: Detected lcore 39 as core 3 on socket 1 00:04:53.727 EAL: Detected lcore 40 as core 4 on socket 1 00:04:53.727 EAL: Detected lcore 41 as core 5 on socket 1 00:04:53.727 EAL: Detected lcore 42 as core 6 on socket 1 00:04:53.727 EAL: Detected lcore 43 as core 7 on socket 1 00:04:53.727 EAL: Detected lcore 44 as core 8 on socket 1 00:04:53.727 EAL: Detected lcore 45 as core 9 on socket 1 00:04:53.727 EAL: Detected lcore 46 as core 10 on socket 1 00:04:53.727 EAL: Detected lcore 47 as core 11 on socket 1 00:04:53.727 EAL: Detected lcore 48 as core 12 on socket 1 00:04:53.727 EAL: Detected lcore 49 as core 13 on socket 1 00:04:53.727 EAL: Detected lcore 50 as core 14 on socket 1 00:04:53.727 EAL: Detected lcore 51 as core 15 on socket 1 00:04:53.727 EAL: Detected lcore 52 as core 16 on socket 1 00:04:53.727 EAL: Detected lcore 53 as core 17 on socket 1 00:04:53.727 EAL: Detected lcore 54 as core 18 on socket 1 
00:04:53.727 EAL: Detected lcore 55 as core 19 on socket 1 00:04:53.727 EAL: Detected lcore 56 as core 20 on socket 1 00:04:53.727 EAL: Detected lcore 57 as core 21 on socket 1 00:04:53.727 EAL: Detected lcore 58 as core 22 on socket 1 00:04:53.727 EAL: Detected lcore 59 as core 23 on socket 1 00:04:53.727 EAL: Detected lcore 60 as core 24 on socket 1 00:04:53.727 EAL: Detected lcore 61 as core 25 on socket 1 00:04:53.727 EAL: Detected lcore 62 as core 26 on socket 1 00:04:53.727 EAL: Detected lcore 63 as core 27 on socket 1 00:04:53.727 EAL: Detected lcore 64 as core 28 on socket 1 00:04:53.727 EAL: Detected lcore 65 as core 29 on socket 1 00:04:53.727 EAL: Detected lcore 66 as core 30 on socket 1 00:04:53.727 EAL: Detected lcore 67 as core 31 on socket 1 00:04:53.727 EAL: Detected lcore 68 as core 32 on socket 1 00:04:53.727 EAL: Detected lcore 69 as core 33 on socket 1 00:04:53.727 EAL: Detected lcore 70 as core 34 on socket 1 00:04:53.727 EAL: Detected lcore 71 as core 35 on socket 1 00:04:53.727 EAL: Detected lcore 72 as core 0 on socket 0 00:04:53.727 EAL: Detected lcore 73 as core 1 on socket 0 00:04:53.727 EAL: Detected lcore 74 as core 2 on socket 0 00:04:53.727 EAL: Detected lcore 75 as core 3 on socket 0 00:04:53.727 EAL: Detected lcore 76 as core 4 on socket 0 00:04:53.727 EAL: Detected lcore 77 as core 5 on socket 0 00:04:53.727 EAL: Detected lcore 78 as core 6 on socket 0 00:04:53.727 EAL: Detected lcore 79 as core 7 on socket 0 00:04:53.727 EAL: Detected lcore 80 as core 8 on socket 0 00:04:53.727 EAL: Detected lcore 81 as core 9 on socket 0 00:04:53.727 EAL: Detected lcore 82 as core 10 on socket 0 00:04:53.727 EAL: Detected lcore 83 as core 11 on socket 0 00:04:53.727 EAL: Detected lcore 84 as core 12 on socket 0 00:04:53.727 EAL: Detected lcore 85 as core 13 on socket 0 00:04:53.727 EAL: Detected lcore 86 as core 14 on socket 0 00:04:53.727 EAL: Detected lcore 87 as core 15 on socket 0 00:04:53.727 EAL: Detected lcore 88 as core 16 on socket 0 00:04:53.727 EAL: Detected lcore 89 as core 17 on socket 0 00:04:53.727 EAL: Detected lcore 90 as core 18 on socket 0 00:04:53.727 EAL: Detected lcore 91 as core 19 on socket 0 00:04:53.727 EAL: Detected lcore 92 as core 20 on socket 0 00:04:53.727 EAL: Detected lcore 93 as core 21 on socket 0 00:04:53.727 EAL: Detected lcore 94 as core 22 on socket 0 00:04:53.727 EAL: Detected lcore 95 as core 23 on socket 0 00:04:53.727 EAL: Detected lcore 96 as core 24 on socket 0 00:04:53.727 EAL: Detected lcore 97 as core 25 on socket 0 00:04:53.727 EAL: Detected lcore 98 as core 26 on socket 0 00:04:53.727 EAL: Detected lcore 99 as core 27 on socket 0 00:04:53.727 EAL: Detected lcore 100 as core 28 on socket 0 00:04:53.727 EAL: Detected lcore 101 as core 29 on socket 0 00:04:53.727 EAL: Detected lcore 102 as core 30 on socket 0 00:04:53.727 EAL: Detected lcore 103 as core 31 on socket 0 00:04:53.727 EAL: Detected lcore 104 as core 32 on socket 0 00:04:53.727 EAL: Detected lcore 105 as core 33 on socket 0 00:04:53.727 EAL: Detected lcore 106 as core 34 on socket 0 00:04:53.727 EAL: Detected lcore 107 as core 35 on socket 0 00:04:53.727 EAL: Detected lcore 108 as core 0 on socket 1 00:04:53.727 EAL: Detected lcore 109 as core 1 on socket 1 00:04:53.727 EAL: Detected lcore 110 as core 2 on socket 1 00:04:53.727 EAL: Detected lcore 111 as core 3 on socket 1 00:04:53.727 EAL: Detected lcore 112 as core 4 on socket 1 00:04:53.727 EAL: Detected lcore 113 as core 5 on socket 1 00:04:53.727 EAL: Detected lcore 114 as core 6 on socket 1 00:04:53.727 
EAL: Detected lcore 115 as core 7 on socket 1 00:04:53.727 EAL: Detected lcore 116 as core 8 on socket 1 00:04:53.727 EAL: Detected lcore 117 as core 9 on socket 1 00:04:53.727 EAL: Detected lcore 118 as core 10 on socket 1 00:04:53.727 EAL: Detected lcore 119 as core 11 on socket 1 00:04:53.727 EAL: Detected lcore 120 as core 12 on socket 1 00:04:53.727 EAL: Detected lcore 121 as core 13 on socket 1 00:04:53.727 EAL: Detected lcore 122 as core 14 on socket 1 00:04:53.727 EAL: Detected lcore 123 as core 15 on socket 1 00:04:53.727 EAL: Detected lcore 124 as core 16 on socket 1 00:04:53.727 EAL: Detected lcore 125 as core 17 on socket 1 00:04:53.727 EAL: Detected lcore 126 as core 18 on socket 1 00:04:53.727 EAL: Detected lcore 127 as core 19 on socket 1 00:04:53.727 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:53.727 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:53.727 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:53.727 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:53.727 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:53.727 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:53.727 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:53.727 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:53.727 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:53.727 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:53.727 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:53.727 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:53.727 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:53.727 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:53.727 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:53.727 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:53.727 EAL: Maximum logical cores by configuration: 128 00:04:53.727 EAL: Detected CPU lcores: 128 00:04:53.727 EAL: Detected NUMA nodes: 2 00:04:53.727 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:53.727 EAL: Detected shared linkage of DPDK 00:04:53.727 EAL: No shared files mode enabled, IPC will be disabled 00:04:53.727 EAL: Bus pci wants IOVA as 'DC' 00:04:53.727 EAL: Buses did not request a specific IOVA mode. 00:04:53.727 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:53.727 EAL: Selected IOVA mode 'VA' 00:04:53.727 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.727 EAL: Probing VFIO support... 00:04:53.727 EAL: IOMMU type 1 (Type 1) is supported 00:04:53.728 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:53.728 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:53.728 EAL: VFIO support initialized 00:04:53.728 EAL: Ask a virtual area of 0x2e000 bytes 00:04:53.728 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:53.728 EAL: Setting up physically contiguous memory... 
00:04:53.728 EAL: Setting maximum number of open files to 524288 00:04:53.728 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:53.728 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:53.728 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:53.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.728 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:53.728 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.728 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:53.728 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:53.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.728 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:53.728 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.728 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:53.728 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:53.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.728 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:53.728 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.728 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:53.728 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:53.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.728 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:53.728 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.728 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:53.728 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:53.728 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:53.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.728 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:53.728 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.728 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:53.728 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:53.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.728 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:53.728 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.728 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:53.728 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:53.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.728 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:53.728 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.728 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:53.728 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:53.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.728 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:53.728 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.728 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:53.728 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:53.728 EAL: Hugepages will be freed exactly as allocated. 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: TSC frequency is ~2400000 KHz 00:04:53.728 EAL: Main lcore 0 is ready (tid=7f64cca32a00;cpuset=[0]) 00:04:53.728 EAL: Trying to obtain current memory policy. 00:04:53.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.728 EAL: Restoring previous memory policy: 0 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was expanded by 2MB 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:53.728 EAL: Mem event callback 'spdk:(nil)' registered 00:04:53.728 00:04:53.728 00:04:53.728 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.728 http://cunit.sourceforge.net/ 00:04:53.728 00:04:53.728 00:04:53.728 Suite: components_suite 00:04:53.728 Test: vtophys_malloc_test ...passed 00:04:53.728 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:53.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.728 EAL: Restoring previous memory policy: 4 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was expanded by 4MB 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was shrunk by 4MB 00:04:53.728 EAL: Trying to obtain current memory policy. 00:04:53.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.728 EAL: Restoring previous memory policy: 4 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was expanded by 6MB 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was shrunk by 6MB 00:04:53.728 EAL: Trying to obtain current memory policy. 00:04:53.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.728 EAL: Restoring previous memory policy: 4 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was expanded by 10MB 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was shrunk by 10MB 00:04:53.728 EAL: Trying to obtain current memory policy. 
00:04:53.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.728 EAL: Restoring previous memory policy: 4 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was expanded by 18MB 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was shrunk by 18MB 00:04:53.728 EAL: Trying to obtain current memory policy. 00:04:53.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.728 EAL: Restoring previous memory policy: 4 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was expanded by 34MB 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was shrunk by 34MB 00:04:53.728 EAL: Trying to obtain current memory policy. 00:04:53.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.728 EAL: Restoring previous memory policy: 4 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was expanded by 66MB 00:04:53.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.728 EAL: request: mp_malloc_sync 00:04:53.728 EAL: No shared files mode enabled, IPC is disabled 00:04:53.728 EAL: Heap on socket 0 was shrunk by 66MB 00:04:53.728 EAL: Trying to obtain current memory policy. 00:04:53.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.990 EAL: Restoring previous memory policy: 4 00:04:53.990 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.990 EAL: request: mp_malloc_sync 00:04:53.990 EAL: No shared files mode enabled, IPC is disabled 00:04:53.990 EAL: Heap on socket 0 was expanded by 130MB 00:04:53.990 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.990 EAL: request: mp_malloc_sync 00:04:53.990 EAL: No shared files mode enabled, IPC is disabled 00:04:53.990 EAL: Heap on socket 0 was shrunk by 130MB 00:04:53.990 EAL: Trying to obtain current memory policy. 00:04:53.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.990 EAL: Restoring previous memory policy: 4 00:04:53.990 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.990 EAL: request: mp_malloc_sync 00:04:53.990 EAL: No shared files mode enabled, IPC is disabled 00:04:53.990 EAL: Heap on socket 0 was expanded by 258MB 00:04:53.990 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.990 EAL: request: mp_malloc_sync 00:04:53.990 EAL: No shared files mode enabled, IPC is disabled 00:04:53.990 EAL: Heap on socket 0 was shrunk by 258MB 00:04:53.990 EAL: Trying to obtain current memory policy. 
00:04:53.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.990 EAL: Restoring previous memory policy: 4 00:04:53.990 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.990 EAL: request: mp_malloc_sync 00:04:53.990 EAL: No shared files mode enabled, IPC is disabled 00:04:53.990 EAL: Heap on socket 0 was expanded by 514MB 00:04:53.990 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.252 EAL: request: mp_malloc_sync 00:04:54.252 EAL: No shared files mode enabled, IPC is disabled 00:04:54.252 EAL: Heap on socket 0 was shrunk by 514MB 00:04:54.252 EAL: Trying to obtain current memory policy. 00:04:54.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.252 EAL: Restoring previous memory policy: 4 00:04:54.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.252 EAL: request: mp_malloc_sync 00:04:54.252 EAL: No shared files mode enabled, IPC is disabled 00:04:54.252 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.514 EAL: request: mp_malloc_sync 00:04:54.515 EAL: No shared files mode enabled, IPC is disabled 00:04:54.515 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.515 passed 00:04:54.515 00:04:54.515 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.515 suites 1 1 n/a 0 0 00:04:54.515 tests 2 2 2 0 0 00:04:54.515 asserts 497 497 497 0 n/a 00:04:54.515 00:04:54.515 Elapsed time = 0.689 seconds 00:04:54.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.515 EAL: request: mp_malloc_sync 00:04:54.515 EAL: No shared files mode enabled, IPC is disabled 00:04:54.515 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.515 EAL: No shared files mode enabled, IPC is disabled 00:04:54.515 EAL: No shared files mode enabled, IPC is disabled 00:04:54.515 EAL: No shared files mode enabled, IPC is disabled 00:04:54.515 00:04:54.515 real 0m0.829s 00:04:54.515 user 0m0.426s 00:04:54.515 sys 0m0.374s 00:04:54.515 10:50:51 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.515 10:50:51 -- common/autotest_common.sh@10 -- # set +x 00:04:54.515 ************************************ 00:04:54.515 END TEST env_vtophys 00:04:54.515 ************************************ 00:04:54.515 10:50:51 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.515 10:50:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.515 10:50:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.515 10:50:51 -- common/autotest_common.sh@10 -- # set +x 00:04:54.515 ************************************ 00:04:54.515 START TEST env_pci 00:04:54.515 ************************************ 00:04:54.515 10:50:51 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.515 00:04:54.515 00:04:54.515 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.515 http://cunit.sourceforge.net/ 00:04:54.515 00:04:54.515 00:04:54.515 Suite: pci 00:04:54.515 Test: pci_hook ...[2024-05-15 10:50:51.142926] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 127727 has claimed it 00:04:54.778 EAL: Cannot find device (10000:00:01.0) 00:04:54.778 EAL: Failed to attach device on primary process 00:04:54.778 passed 00:04:54.778 00:04:54.778 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.778 suites 1 1 n/a 0 0 00:04:54.778 tests 1 1 1 0 0 
00:04:54.778 asserts 25 25 25 0 n/a 00:04:54.778 00:04:54.778 Elapsed time = 0.030 seconds 00:04:54.778 00:04:54.778 real 0m0.050s 00:04:54.778 user 0m0.018s 00:04:54.778 sys 0m0.032s 00:04:54.778 10:50:51 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.778 10:50:51 -- common/autotest_common.sh@10 -- # set +x 00:04:54.778 ************************************ 00:04:54.778 END TEST env_pci 00:04:54.778 ************************************ 00:04:54.778 10:50:51 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.778 10:50:51 -- env/env.sh@15 -- # uname 00:04:54.778 10:50:51 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.778 10:50:51 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.778 10:50:51 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.778 10:50:51 -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:54.778 10:50:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.778 10:50:51 -- common/autotest_common.sh@10 -- # set +x 00:04:54.778 ************************************ 00:04:54.778 START TEST env_dpdk_post_init 00:04:54.778 ************************************ 00:04:54.778 10:50:51 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.778 EAL: Detected CPU lcores: 128 00:04:54.778 EAL: Detected NUMA nodes: 2 00:04:54.778 EAL: Detected shared linkage of DPDK 00:04:54.778 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.778 EAL: Selected IOVA mode 'VA' 00:04:54.778 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.778 EAL: VFIO support initialized 00:04:54.778 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.778 EAL: Using IOMMU type 1 (Type 1) 00:04:55.040 EAL: Ignore mapping IO port bar(1) 00:04:55.040 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:55.303 EAL: Ignore mapping IO port bar(1) 00:04:55.303 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:55.564 EAL: Ignore mapping IO port bar(1) 00:04:55.564 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:55.564 EAL: Ignore mapping IO port bar(1) 00:04:55.826 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:55.826 EAL: Ignore mapping IO port bar(1) 00:04:56.087 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:56.087 EAL: Ignore mapping IO port bar(1) 00:04:56.350 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:56.350 EAL: Ignore mapping IO port bar(1) 00:04:56.350 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:56.625 EAL: Ignore mapping IO port bar(1) 00:04:56.625 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:56.892 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:57.175 EAL: Ignore mapping IO port bar(1) 00:04:57.175 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:57.175 EAL: Ignore mapping IO port bar(1) 00:04:57.437 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:57.437 EAL: Ignore mapping IO port bar(1) 00:04:57.699 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:04:57.699 EAL: Ignore mapping IO port bar(1) 00:04:57.699 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:57.960 EAL: Ignore mapping IO port bar(1) 00:04:57.960 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:58.223 EAL: Ignore mapping IO port bar(1) 00:04:58.223 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:58.485 EAL: Ignore mapping IO port bar(1) 00:04:58.485 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:58.485 EAL: Ignore mapping IO port bar(1) 00:04:58.749 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:58.749 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:58.749 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:58.749 Starting DPDK initialization... 00:04:58.749 Starting SPDK post initialization... 00:04:58.749 SPDK NVMe probe 00:04:58.749 Attaching to 0000:65:00.0 00:04:58.749 Attached to 0000:65:00.0 00:04:58.749 Cleaning up... 00:05:00.667 00:05:00.667 real 0m5.749s 00:05:00.667 user 0m0.185s 00:05:00.667 sys 0m0.109s 00:05:00.667 10:50:57 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.667 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.667 ************************************ 00:05:00.667 END TEST env_dpdk_post_init 00:05:00.667 ************************************ 00:05:00.667 10:50:57 -- env/env.sh@26 -- # uname 00:05:00.667 10:50:57 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:00.667 10:50:57 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.667 10:50:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.667 10:50:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.667 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.667 ************************************ 00:05:00.667 START TEST env_mem_callbacks 00:05:00.667 ************************************ 00:05:00.667 10:50:57 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.667 EAL: Detected CPU lcores: 128 00:05:00.667 EAL: Detected NUMA nodes: 2 00:05:00.667 EAL: Detected shared linkage of DPDK 00:05:00.667 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.667 EAL: Selected IOVA mode 'VA' 00:05:00.667 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.667 EAL: VFIO support initialized 00:05:00.667 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.667 00:05:00.667 00:05:00.667 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.667 http://cunit.sourceforge.net/ 00:05:00.667 00:05:00.667 00:05:00.667 Suite: memory 00:05:00.667 Test: test ... 
00:05:00.667 register 0x200000200000 2097152 00:05:00.667 malloc 3145728 00:05:00.667 register 0x200000400000 4194304 00:05:00.667 buf 0x200000500000 len 3145728 PASSED 00:05:00.667 malloc 64 00:05:00.667 buf 0x2000004fff40 len 64 PASSED 00:05:00.667 malloc 4194304 00:05:00.667 register 0x200000800000 6291456 00:05:00.667 buf 0x200000a00000 len 4194304 PASSED 00:05:00.667 free 0x200000500000 3145728 00:05:00.667 free 0x2000004fff40 64 00:05:00.667 unregister 0x200000400000 4194304 PASSED 00:05:00.667 free 0x200000a00000 4194304 00:05:00.667 unregister 0x200000800000 6291456 PASSED 00:05:00.667 malloc 8388608 00:05:00.667 register 0x200000400000 10485760 00:05:00.667 buf 0x200000600000 len 8388608 PASSED 00:05:00.667 free 0x200000600000 8388608 00:05:00.667 unregister 0x200000400000 10485760 PASSED 00:05:00.667 passed 00:05:00.667 00:05:00.667 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.667 suites 1 1 n/a 0 0 00:05:00.667 tests 1 1 1 0 0 00:05:00.667 asserts 15 15 15 0 n/a 00:05:00.667 00:05:00.667 Elapsed time = 0.010 seconds 00:05:00.667 00:05:00.667 real 0m0.069s 00:05:00.667 user 0m0.023s 00:05:00.667 sys 0m0.045s 00:05:00.667 10:50:57 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.667 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.668 ************************************ 00:05:00.668 END TEST env_mem_callbacks 00:05:00.668 ************************************ 00:05:00.668 00:05:00.668 real 0m7.449s 00:05:00.668 user 0m1.050s 00:05:00.668 sys 0m0.926s 00:05:00.668 10:50:57 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.668 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.668 ************************************ 00:05:00.668 END TEST env 00:05:00.668 ************************************ 00:05:00.668 10:50:57 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:00.668 10:50:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.668 10:50:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.668 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.668 ************************************ 00:05:00.668 START TEST rpc 00:05:00.668 ************************************ 00:05:00.668 10:50:57 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:00.930 * Looking for test storage... 00:05:00.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.930 10:50:57 -- rpc/rpc.sh@65 -- # spdk_pid=129168 00:05:00.930 10:50:57 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.930 10:50:57 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:00.930 10:50:57 -- rpc/rpc.sh@67 -- # waitforlisten 129168 00:05:00.930 10:50:57 -- common/autotest_common.sh@827 -- # '[' -z 129168 ']' 00:05:00.930 10:50:57 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.930 10:50:57 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:00.930 10:50:57 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.930 10:50:57 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:00.930 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.930 [2024-05-15 10:50:57.472909] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:00.930 [2024-05-15 10:50:57.472977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129168 ] 00:05:00.930 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.930 [2024-05-15 10:50:57.554626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.192 [2024-05-15 10:50:57.648980] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:01.192 [2024-05-15 10:50:57.649042] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 129168' to capture a snapshot of events at runtime. 00:05:01.192 [2024-05-15 10:50:57.649051] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.192 [2024-05-15 10:50:57.649059] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.192 [2024-05-15 10:50:57.649065] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid129168 for offline analysis/debug. 00:05:01.192 [2024-05-15 10:50:57.649090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.766 10:50:58 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:01.766 10:50:58 -- common/autotest_common.sh@860 -- # return 0 00:05:01.766 10:50:58 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.766 10:50:58 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.766 10:50:58 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:01.766 10:50:58 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:01.766 10:50:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.766 10:50:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.766 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.766 ************************************ 00:05:01.766 START TEST rpc_integrity 00:05:01.766 ************************************ 00:05:01.766 10:50:58 -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:01.766 10:50:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.766 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.766 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.766 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.766 10:50:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.766 10:50:58 -- rpc/rpc.sh@13 -- # jq length 00:05:01.766 10:50:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.766 10:50:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.766 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:01.766 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.766 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.766 10:50:58 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:01.766 10:50:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.766 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.766 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.766 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.766 10:50:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.766 { 00:05:01.766 "name": "Malloc0", 00:05:01.766 "aliases": [ 00:05:01.766 "af30bc2b-59b5-4dce-b58e-d1db35056a3f" 00:05:01.766 ], 00:05:01.766 "product_name": "Malloc disk", 00:05:01.766 "block_size": 512, 00:05:01.766 "num_blocks": 16384, 00:05:01.766 "uuid": "af30bc2b-59b5-4dce-b58e-d1db35056a3f", 00:05:01.766 "assigned_rate_limits": { 00:05:01.766 "rw_ios_per_sec": 0, 00:05:01.766 "rw_mbytes_per_sec": 0, 00:05:01.766 "r_mbytes_per_sec": 0, 00:05:01.766 "w_mbytes_per_sec": 0 00:05:01.766 }, 00:05:01.766 "claimed": false, 00:05:01.766 "zoned": false, 00:05:01.766 "supported_io_types": { 00:05:01.766 "read": true, 00:05:01.766 "write": true, 00:05:01.766 "unmap": true, 00:05:01.766 "write_zeroes": true, 00:05:01.766 "flush": true, 00:05:01.766 "reset": true, 00:05:01.766 "compare": false, 00:05:01.766 "compare_and_write": false, 00:05:01.766 "abort": true, 00:05:01.766 "nvme_admin": false, 00:05:01.766 "nvme_io": false 00:05:01.766 }, 00:05:01.766 "memory_domains": [ 00:05:01.766 { 00:05:01.766 "dma_device_id": "system", 00:05:01.766 "dma_device_type": 1 00:05:01.766 }, 00:05:01.766 { 00:05:01.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.766 "dma_device_type": 2 00:05:01.766 } 00:05:01.766 ], 00:05:01.766 "driver_specific": {} 00:05:01.766 } 00:05:01.766 ]' 00:05:01.766 10:50:58 -- rpc/rpc.sh@17 -- # jq length 00:05:02.029 10:50:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.029 10:50:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:02.029 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.029 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.029 [2024-05-15 10:50:58.429984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:02.029 [2024-05-15 10:50:58.430029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.029 [2024-05-15 10:50:58.430052] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c52d20 00:05:02.029 [2024-05-15 10:50:58.430061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.029 [2024-05-15 10:50:58.431632] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.029 [2024-05-15 10:50:58.431668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.029 Passthru0 00:05:02.029 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.029 10:50:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.029 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.029 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.029 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.029 10:50:58 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.029 { 00:05:02.029 "name": "Malloc0", 00:05:02.029 "aliases": [ 00:05:02.029 "af30bc2b-59b5-4dce-b58e-d1db35056a3f" 00:05:02.029 ], 00:05:02.029 "product_name": "Malloc disk", 00:05:02.029 "block_size": 512, 
00:05:02.029 "num_blocks": 16384, 00:05:02.029 "uuid": "af30bc2b-59b5-4dce-b58e-d1db35056a3f", 00:05:02.029 "assigned_rate_limits": { 00:05:02.029 "rw_ios_per_sec": 0, 00:05:02.029 "rw_mbytes_per_sec": 0, 00:05:02.029 "r_mbytes_per_sec": 0, 00:05:02.029 "w_mbytes_per_sec": 0 00:05:02.029 }, 00:05:02.029 "claimed": true, 00:05:02.029 "claim_type": "exclusive_write", 00:05:02.029 "zoned": false, 00:05:02.029 "supported_io_types": { 00:05:02.029 "read": true, 00:05:02.029 "write": true, 00:05:02.029 "unmap": true, 00:05:02.029 "write_zeroes": true, 00:05:02.029 "flush": true, 00:05:02.029 "reset": true, 00:05:02.029 "compare": false, 00:05:02.029 "compare_and_write": false, 00:05:02.029 "abort": true, 00:05:02.029 "nvme_admin": false, 00:05:02.029 "nvme_io": false 00:05:02.029 }, 00:05:02.029 "memory_domains": [ 00:05:02.029 { 00:05:02.029 "dma_device_id": "system", 00:05:02.029 "dma_device_type": 1 00:05:02.029 }, 00:05:02.029 { 00:05:02.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.029 "dma_device_type": 2 00:05:02.029 } 00:05:02.029 ], 00:05:02.029 "driver_specific": {} 00:05:02.029 }, 00:05:02.029 { 00:05:02.029 "name": "Passthru0", 00:05:02.029 "aliases": [ 00:05:02.029 "f9dfc244-ca32-56c9-9e8c-589b0e95bef8" 00:05:02.029 ], 00:05:02.030 "product_name": "passthru", 00:05:02.030 "block_size": 512, 00:05:02.030 "num_blocks": 16384, 00:05:02.030 "uuid": "f9dfc244-ca32-56c9-9e8c-589b0e95bef8", 00:05:02.030 "assigned_rate_limits": { 00:05:02.030 "rw_ios_per_sec": 0, 00:05:02.030 "rw_mbytes_per_sec": 0, 00:05:02.030 "r_mbytes_per_sec": 0, 00:05:02.030 "w_mbytes_per_sec": 0 00:05:02.030 }, 00:05:02.030 "claimed": false, 00:05:02.030 "zoned": false, 00:05:02.030 "supported_io_types": { 00:05:02.030 "read": true, 00:05:02.030 "write": true, 00:05:02.030 "unmap": true, 00:05:02.030 "write_zeroes": true, 00:05:02.030 "flush": true, 00:05:02.030 "reset": true, 00:05:02.030 "compare": false, 00:05:02.030 "compare_and_write": false, 00:05:02.030 "abort": true, 00:05:02.030 "nvme_admin": false, 00:05:02.030 "nvme_io": false 00:05:02.030 }, 00:05:02.030 "memory_domains": [ 00:05:02.030 { 00:05:02.030 "dma_device_id": "system", 00:05:02.030 "dma_device_type": 1 00:05:02.030 }, 00:05:02.030 { 00:05:02.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.030 "dma_device_type": 2 00:05:02.030 } 00:05:02.030 ], 00:05:02.030 "driver_specific": { 00:05:02.030 "passthru": { 00:05:02.030 "name": "Passthru0", 00:05:02.030 "base_bdev_name": "Malloc0" 00:05:02.030 } 00:05:02.030 } 00:05:02.030 } 00:05:02.030 ]' 00:05:02.030 10:50:58 -- rpc/rpc.sh@21 -- # jq length 00:05:02.030 10:50:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.030 10:50:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.030 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.030 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.030 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.030 10:50:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:02.030 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.030 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.030 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.030 10:50:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.030 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.030 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.030 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.030 10:50:58 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.030 10:50:58 -- rpc/rpc.sh@26 -- # jq length 00:05:02.030 10:50:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.030 00:05:02.030 real 0m0.295s 00:05:02.030 user 0m0.192s 00:05:02.030 sys 0m0.037s 00:05:02.030 10:50:58 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.030 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.030 ************************************ 00:05:02.030 END TEST rpc_integrity 00:05:02.030 ************************************ 00:05:02.030 10:50:58 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:02.030 10:50:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.030 10:50:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.030 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.030 ************************************ 00:05:02.030 START TEST rpc_plugins 00:05:02.030 ************************************ 00:05:02.030 10:50:58 -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:02.030 10:50:58 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:02.030 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.030 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.292 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.292 10:50:58 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:02.292 10:50:58 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:02.292 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.292 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.292 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.292 10:50:58 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:02.292 { 00:05:02.292 "name": "Malloc1", 00:05:02.292 "aliases": [ 00:05:02.292 "33f541d1-8e71-4e53-89b4-34dce40bc2ac" 00:05:02.292 ], 00:05:02.292 "product_name": "Malloc disk", 00:05:02.292 "block_size": 4096, 00:05:02.292 "num_blocks": 256, 00:05:02.292 "uuid": "33f541d1-8e71-4e53-89b4-34dce40bc2ac", 00:05:02.292 "assigned_rate_limits": { 00:05:02.292 "rw_ios_per_sec": 0, 00:05:02.292 "rw_mbytes_per_sec": 0, 00:05:02.292 "r_mbytes_per_sec": 0, 00:05:02.292 "w_mbytes_per_sec": 0 00:05:02.292 }, 00:05:02.292 "claimed": false, 00:05:02.292 "zoned": false, 00:05:02.292 "supported_io_types": { 00:05:02.292 "read": true, 00:05:02.292 "write": true, 00:05:02.292 "unmap": true, 00:05:02.292 "write_zeroes": true, 00:05:02.292 "flush": true, 00:05:02.292 "reset": true, 00:05:02.292 "compare": false, 00:05:02.292 "compare_and_write": false, 00:05:02.292 "abort": true, 00:05:02.292 "nvme_admin": false, 00:05:02.292 "nvme_io": false 00:05:02.292 }, 00:05:02.292 "memory_domains": [ 00:05:02.292 { 00:05:02.292 "dma_device_id": "system", 00:05:02.292 "dma_device_type": 1 00:05:02.292 }, 00:05:02.292 { 00:05:02.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.292 "dma_device_type": 2 00:05:02.292 } 00:05:02.292 ], 00:05:02.292 "driver_specific": {} 00:05:02.292 } 00:05:02.292 ]' 00:05:02.292 10:50:58 -- rpc/rpc.sh@32 -- # jq length 00:05:02.292 10:50:58 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:02.292 10:50:58 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:02.292 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.292 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.292 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.292 10:50:58 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:02.292 10:50:58 -- common/autotest_common.sh@559 
-- # xtrace_disable 00:05:02.292 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.292 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.292 10:50:58 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:02.292 10:50:58 -- rpc/rpc.sh@36 -- # jq length 00:05:02.292 10:50:58 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:02.292 00:05:02.292 real 0m0.150s 00:05:02.292 user 0m0.095s 00:05:02.292 sys 0m0.017s 00:05:02.292 10:50:58 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.292 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.292 ************************************ 00:05:02.292 END TEST rpc_plugins 00:05:02.292 ************************************ 00:05:02.292 10:50:58 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:02.292 10:50:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.292 10:50:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.292 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.292 ************************************ 00:05:02.292 START TEST rpc_trace_cmd_test 00:05:02.292 ************************************ 00:05:02.292 10:50:58 -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:02.292 10:50:58 -- rpc/rpc.sh@40 -- # local info 00:05:02.292 10:50:58 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:02.292 10:50:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.292 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.292 10:50:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.292 10:50:58 -- rpc/rpc.sh@42 -- # info='{ 00:05:02.292 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid129168", 00:05:02.292 "tpoint_group_mask": "0x8", 00:05:02.292 "iscsi_conn": { 00:05:02.292 "mask": "0x2", 00:05:02.292 "tpoint_mask": "0x0" 00:05:02.292 }, 00:05:02.292 "scsi": { 00:05:02.292 "mask": "0x4", 00:05:02.292 "tpoint_mask": "0x0" 00:05:02.292 }, 00:05:02.292 "bdev": { 00:05:02.292 "mask": "0x8", 00:05:02.292 "tpoint_mask": "0xffffffffffffffff" 00:05:02.292 }, 00:05:02.292 "nvmf_rdma": { 00:05:02.292 "mask": "0x10", 00:05:02.292 "tpoint_mask": "0x0" 00:05:02.292 }, 00:05:02.292 "nvmf_tcp": { 00:05:02.292 "mask": "0x20", 00:05:02.292 "tpoint_mask": "0x0" 00:05:02.292 }, 00:05:02.292 "ftl": { 00:05:02.292 "mask": "0x40", 00:05:02.292 "tpoint_mask": "0x0" 00:05:02.292 }, 00:05:02.293 "blobfs": { 00:05:02.293 "mask": "0x80", 00:05:02.293 "tpoint_mask": "0x0" 00:05:02.293 }, 00:05:02.293 "dsa": { 00:05:02.293 "mask": "0x200", 00:05:02.293 "tpoint_mask": "0x0" 00:05:02.293 }, 00:05:02.293 "thread": { 00:05:02.293 "mask": "0x400", 00:05:02.293 "tpoint_mask": "0x0" 00:05:02.293 }, 00:05:02.293 "nvme_pcie": { 00:05:02.293 "mask": "0x800", 00:05:02.293 "tpoint_mask": "0x0" 00:05:02.293 }, 00:05:02.293 "iaa": { 00:05:02.293 "mask": "0x1000", 00:05:02.293 "tpoint_mask": "0x0" 00:05:02.293 }, 00:05:02.293 "nvme_tcp": { 00:05:02.293 "mask": "0x2000", 00:05:02.293 "tpoint_mask": "0x0" 00:05:02.293 }, 00:05:02.293 "bdev_nvme": { 00:05:02.293 "mask": "0x4000", 00:05:02.293 "tpoint_mask": "0x0" 00:05:02.293 }, 00:05:02.293 "sock": { 00:05:02.293 "mask": "0x8000", 00:05:02.293 "tpoint_mask": "0x0" 00:05:02.293 } 00:05:02.293 }' 00:05:02.293 10:50:58 -- rpc/rpc.sh@43 -- # jq length 00:05:02.554 10:50:58 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:02.554 10:50:58 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:02.554 10:50:59 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:02.554 10:50:59 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:05:02.554 10:50:59 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:02.554 10:50:59 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:02.554 10:50:59 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:02.554 10:50:59 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:02.554 10:50:59 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:02.554 00:05:02.554 real 0m0.249s 00:05:02.554 user 0m0.207s 00:05:02.554 sys 0m0.034s 00:05:02.554 10:50:59 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.554 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.554 ************************************ 00:05:02.554 END TEST rpc_trace_cmd_test 00:05:02.554 ************************************ 00:05:02.554 10:50:59 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:02.554 10:50:59 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:02.554 10:50:59 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:02.554 10:50:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.554 10:50:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.554 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.816 ************************************ 00:05:02.816 START TEST rpc_daemon_integrity 00:05:02.816 ************************************ 00:05:02.816 10:50:59 -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:02.816 10:50:59 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.816 10:50:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.816 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.816 10:50:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.816 10:50:59 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.816 10:50:59 -- rpc/rpc.sh@13 -- # jq length 00:05:02.816 10:50:59 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.816 10:50:59 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.816 10:50:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.816 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.816 10:50:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.816 10:50:59 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:02.816 10:50:59 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.816 10:50:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.816 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.816 10:50:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.816 10:50:59 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.816 { 00:05:02.816 "name": "Malloc2", 00:05:02.816 "aliases": [ 00:05:02.816 "344b7239-92e7-4cd6-9434-ecc8c2a2f253" 00:05:02.816 ], 00:05:02.816 "product_name": "Malloc disk", 00:05:02.816 "block_size": 512, 00:05:02.816 "num_blocks": 16384, 00:05:02.816 "uuid": "344b7239-92e7-4cd6-9434-ecc8c2a2f253", 00:05:02.816 "assigned_rate_limits": { 00:05:02.816 "rw_ios_per_sec": 0, 00:05:02.816 "rw_mbytes_per_sec": 0, 00:05:02.816 "r_mbytes_per_sec": 0, 00:05:02.816 "w_mbytes_per_sec": 0 00:05:02.816 }, 00:05:02.816 "claimed": false, 00:05:02.816 "zoned": false, 00:05:02.816 "supported_io_types": { 00:05:02.816 "read": true, 00:05:02.816 "write": true, 00:05:02.816 "unmap": true, 00:05:02.816 "write_zeroes": true, 00:05:02.816 "flush": true, 00:05:02.816 "reset": true, 00:05:02.816 "compare": false, 00:05:02.816 "compare_and_write": false, 00:05:02.816 "abort": true, 00:05:02.816 "nvme_admin": false, 00:05:02.816 "nvme_io": false 00:05:02.816 }, 00:05:02.816 "memory_domains": [ 00:05:02.816 { 00:05:02.816 "dma_device_id": "system", 00:05:02.816 
"dma_device_type": 1 00:05:02.816 }, 00:05:02.816 { 00:05:02.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.816 "dma_device_type": 2 00:05:02.816 } 00:05:02.816 ], 00:05:02.816 "driver_specific": {} 00:05:02.816 } 00:05:02.816 ]' 00:05:02.816 10:50:59 -- rpc/rpc.sh@17 -- # jq length 00:05:02.816 10:50:59 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.816 10:50:59 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:02.816 10:50:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.816 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.816 [2024-05-15 10:50:59.380549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:02.816 [2024-05-15 10:50:59.380591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.816 [2024-05-15 10:50:59.380608] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dfbe40 00:05:02.816 [2024-05-15 10:50:59.380615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.816 [2024-05-15 10:50:59.381989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.816 [2024-05-15 10:50:59.382023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.816 Passthru0 00:05:02.816 10:50:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.816 10:50:59 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.816 10:50:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.816 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.816 10:50:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.816 10:50:59 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.816 { 00:05:02.816 "name": "Malloc2", 00:05:02.816 "aliases": [ 00:05:02.816 "344b7239-92e7-4cd6-9434-ecc8c2a2f253" 00:05:02.816 ], 00:05:02.816 "product_name": "Malloc disk", 00:05:02.816 "block_size": 512, 00:05:02.816 "num_blocks": 16384, 00:05:02.816 "uuid": "344b7239-92e7-4cd6-9434-ecc8c2a2f253", 00:05:02.816 "assigned_rate_limits": { 00:05:02.816 "rw_ios_per_sec": 0, 00:05:02.816 "rw_mbytes_per_sec": 0, 00:05:02.816 "r_mbytes_per_sec": 0, 00:05:02.816 "w_mbytes_per_sec": 0 00:05:02.816 }, 00:05:02.816 "claimed": true, 00:05:02.816 "claim_type": "exclusive_write", 00:05:02.816 "zoned": false, 00:05:02.816 "supported_io_types": { 00:05:02.816 "read": true, 00:05:02.816 "write": true, 00:05:02.816 "unmap": true, 00:05:02.816 "write_zeroes": true, 00:05:02.816 "flush": true, 00:05:02.816 "reset": true, 00:05:02.816 "compare": false, 00:05:02.816 "compare_and_write": false, 00:05:02.817 "abort": true, 00:05:02.817 "nvme_admin": false, 00:05:02.817 "nvme_io": false 00:05:02.817 }, 00:05:02.817 "memory_domains": [ 00:05:02.817 { 00:05:02.817 "dma_device_id": "system", 00:05:02.817 "dma_device_type": 1 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.817 "dma_device_type": 2 00:05:02.817 } 00:05:02.817 ], 00:05:02.817 "driver_specific": {} 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "name": "Passthru0", 00:05:02.817 "aliases": [ 00:05:02.817 "4a840b8e-95d2-5868-888e-8e40cdabac51" 00:05:02.817 ], 00:05:02.817 "product_name": "passthru", 00:05:02.817 "block_size": 512, 00:05:02.817 "num_blocks": 16384, 00:05:02.817 "uuid": "4a840b8e-95d2-5868-888e-8e40cdabac51", 00:05:02.817 "assigned_rate_limits": { 00:05:02.817 "rw_ios_per_sec": 0, 00:05:02.817 "rw_mbytes_per_sec": 0, 00:05:02.817 "r_mbytes_per_sec": 0, 00:05:02.817 
"w_mbytes_per_sec": 0 00:05:02.817 }, 00:05:02.817 "claimed": false, 00:05:02.817 "zoned": false, 00:05:02.817 "supported_io_types": { 00:05:02.817 "read": true, 00:05:02.817 "write": true, 00:05:02.817 "unmap": true, 00:05:02.817 "write_zeroes": true, 00:05:02.817 "flush": true, 00:05:02.817 "reset": true, 00:05:02.817 "compare": false, 00:05:02.817 "compare_and_write": false, 00:05:02.817 "abort": true, 00:05:02.817 "nvme_admin": false, 00:05:02.817 "nvme_io": false 00:05:02.817 }, 00:05:02.817 "memory_domains": [ 00:05:02.817 { 00:05:02.817 "dma_device_id": "system", 00:05:02.817 "dma_device_type": 1 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.817 "dma_device_type": 2 00:05:02.817 } 00:05:02.817 ], 00:05:02.817 "driver_specific": { 00:05:02.817 "passthru": { 00:05:02.817 "name": "Passthru0", 00:05:02.817 "base_bdev_name": "Malloc2" 00:05:02.817 } 00:05:02.817 } 00:05:02.817 } 00:05:02.817 ]' 00:05:02.817 10:50:59 -- rpc/rpc.sh@21 -- # jq length 00:05:02.817 10:50:59 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.817 10:50:59 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.817 10:50:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.817 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.817 10:50:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.817 10:50:59 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:02.817 10:50:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.817 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:03.078 10:50:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.078 10:50:59 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.078 10:50:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.078 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:03.078 10:50:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.078 10:50:59 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.078 10:50:59 -- rpc/rpc.sh@26 -- # jq length 00:05:03.078 10:50:59 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.078 00:05:03.078 real 0m0.291s 00:05:03.078 user 0m0.185s 00:05:03.078 sys 0m0.039s 00:05:03.078 10:50:59 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.078 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:03.078 ************************************ 00:05:03.078 END TEST rpc_daemon_integrity 00:05:03.078 ************************************ 00:05:03.078 10:50:59 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:03.078 10:50:59 -- rpc/rpc.sh@84 -- # killprocess 129168 00:05:03.078 10:50:59 -- common/autotest_common.sh@946 -- # '[' -z 129168 ']' 00:05:03.078 10:50:59 -- common/autotest_common.sh@950 -- # kill -0 129168 00:05:03.079 10:50:59 -- common/autotest_common.sh@951 -- # uname 00:05:03.079 10:50:59 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:03.079 10:50:59 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 129168 00:05:03.079 10:50:59 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:03.079 10:50:59 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:03.079 10:50:59 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 129168' 00:05:03.079 killing process with pid 129168 00:05:03.079 10:50:59 -- common/autotest_common.sh@965 -- # kill 129168 00:05:03.079 10:50:59 -- common/autotest_common.sh@970 -- # wait 129168 00:05:03.340 00:05:03.340 real 0m2.572s 00:05:03.340 user 0m3.290s 00:05:03.340 
sys 0m0.801s 00:05:03.340 10:50:59 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.340 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:03.340 ************************************ 00:05:03.340 END TEST rpc 00:05:03.340 ************************************ 00:05:03.340 10:50:59 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:03.340 10:50:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.340 10:50:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.340 10:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:03.340 ************************************ 00:05:03.340 START TEST skip_rpc 00:05:03.340 ************************************ 00:05:03.340 10:50:59 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:03.602 * Looking for test storage... 00:05:03.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.602 10:51:00 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.602 10:51:00 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:03.602 10:51:00 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:03.602 10:51:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.602 10:51:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.602 10:51:00 -- common/autotest_common.sh@10 -- # set +x 00:05:03.602 ************************************ 00:05:03.602 START TEST skip_rpc 00:05:03.602 ************************************ 00:05:03.602 10:51:00 -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:03.602 10:51:00 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=129895 00:05:03.602 10:51:00 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.602 10:51:00 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:03.602 10:51:00 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:03.602 [2024-05-15 10:51:00.167642] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:05:03.602 [2024-05-15 10:51:00.167706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129895 ] 00:05:03.602 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.602 [2024-05-15 10:51:00.248413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.863 [2024-05-15 10:51:00.346115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:09.159 10:51:05 -- common/autotest_common.sh@648 -- # local es=0 00:05:09.159 10:51:05 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:09.159 10:51:05 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:09.159 10:51:05 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.159 10:51:05 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:09.159 10:51:05 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.159 10:51:05 -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:09.159 10:51:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.159 10:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:09.159 10:51:05 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:09.159 10:51:05 -- common/autotest_common.sh@651 -- # es=1 00:05:09.159 10:51:05 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.159 10:51:05 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.159 10:51:05 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@23 -- # killprocess 129895 00:05:09.159 10:51:05 -- common/autotest_common.sh@946 -- # '[' -z 129895 ']' 00:05:09.159 10:51:05 -- common/autotest_common.sh@950 -- # kill -0 129895 00:05:09.159 10:51:05 -- common/autotest_common.sh@951 -- # uname 00:05:09.159 10:51:05 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:09.159 10:51:05 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 129895 00:05:09.159 10:51:05 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:09.159 10:51:05 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:09.159 10:51:05 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 129895' 00:05:09.159 killing process with pid 129895 00:05:09.159 10:51:05 -- common/autotest_common.sh@965 -- # kill 129895 00:05:09.159 10:51:05 -- common/autotest_common.sh@970 -- # wait 129895 00:05:09.159 00:05:09.159 real 0m5.252s 00:05:09.159 user 0m4.995s 00:05:09.159 sys 0m0.288s 00:05:09.159 10:51:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.159 10:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:09.159 ************************************ 00:05:09.159 END TEST skip_rpc 00:05:09.159 ************************************ 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:09.159 10:51:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.159 10:51:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.159 10:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:09.159 ************************************ 00:05:09.159 START TEST skip_rpc_with_json 00:05:09.159 ************************************ 00:05:09.159 
10:51:05 -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=131158 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@31 -- # waitforlisten 131158 00:05:09.159 10:51:05 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.159 10:51:05 -- common/autotest_common.sh@827 -- # '[' -z 131158 ']' 00:05:09.159 10:51:05 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.159 10:51:05 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:09.159 10:51:05 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.159 10:51:05 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:09.159 10:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:09.159 [2024-05-15 10:51:05.501647] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:09.159 [2024-05-15 10:51:05.501704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131158 ] 00:05:09.159 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.159 [2024-05-15 10:51:05.581285] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.159 [2024-05-15 10:51:05.641737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.731 10:51:06 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:09.731 10:51:06 -- common/autotest_common.sh@860 -- # return 0 00:05:09.731 10:51:06 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:09.731 10:51:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.731 10:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.731 [2024-05-15 10:51:06.282001] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:09.731 request: 00:05:09.731 { 00:05:09.731 "trtype": "tcp", 00:05:09.731 "method": "nvmf_get_transports", 00:05:09.731 "req_id": 1 00:05:09.731 } 00:05:09.731 Got JSON-RPC error response 00:05:09.731 response: 00:05:09.731 { 00:05:09.731 "code": -19, 00:05:09.731 "message": "No such device" 00:05:09.731 } 00:05:09.731 10:51:06 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:09.731 10:51:06 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:09.731 10:51:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.731 10:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.731 [2024-05-15 10:51:06.294087] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.731 10:51:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.731 10:51:06 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:09.731 10:51:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.731 10:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.994 10:51:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.994 10:51:06 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:09.994 { 00:05:09.994 
"subsystems": [ 00:05:09.994 { 00:05:09.994 "subsystem": "vfio_user_target", 00:05:09.994 "config": null 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "keyring", 00:05:09.994 "config": [] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "iobuf", 00:05:09.994 "config": [ 00:05:09.994 { 00:05:09.994 "method": "iobuf_set_options", 00:05:09.994 "params": { 00:05:09.994 "small_pool_count": 8192, 00:05:09.994 "large_pool_count": 1024, 00:05:09.994 "small_bufsize": 8192, 00:05:09.994 "large_bufsize": 135168 00:05:09.994 } 00:05:09.994 } 00:05:09.994 ] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "sock", 00:05:09.994 "config": [ 00:05:09.994 { 00:05:09.994 "method": "sock_impl_set_options", 00:05:09.994 "params": { 00:05:09.994 "impl_name": "posix", 00:05:09.994 "recv_buf_size": 2097152, 00:05:09.994 "send_buf_size": 2097152, 00:05:09.994 "enable_recv_pipe": true, 00:05:09.994 "enable_quickack": false, 00:05:09.994 "enable_placement_id": 0, 00:05:09.994 "enable_zerocopy_send_server": true, 00:05:09.994 "enable_zerocopy_send_client": false, 00:05:09.994 "zerocopy_threshold": 0, 00:05:09.994 "tls_version": 0, 00:05:09.994 "enable_ktls": false 00:05:09.994 } 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "method": "sock_impl_set_options", 00:05:09.994 "params": { 00:05:09.994 "impl_name": "ssl", 00:05:09.994 "recv_buf_size": 4096, 00:05:09.994 "send_buf_size": 4096, 00:05:09.994 "enable_recv_pipe": true, 00:05:09.994 "enable_quickack": false, 00:05:09.994 "enable_placement_id": 0, 00:05:09.994 "enable_zerocopy_send_server": true, 00:05:09.994 "enable_zerocopy_send_client": false, 00:05:09.994 "zerocopy_threshold": 0, 00:05:09.994 "tls_version": 0, 00:05:09.994 "enable_ktls": false 00:05:09.994 } 00:05:09.994 } 00:05:09.994 ] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "vmd", 00:05:09.994 "config": [] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "accel", 00:05:09.994 "config": [ 00:05:09.994 { 00:05:09.994 "method": "accel_set_options", 00:05:09.994 "params": { 00:05:09.994 "small_cache_size": 128, 00:05:09.994 "large_cache_size": 16, 00:05:09.994 "task_count": 2048, 00:05:09.994 "sequence_count": 2048, 00:05:09.994 "buf_count": 2048 00:05:09.994 } 00:05:09.994 } 00:05:09.994 ] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "bdev", 00:05:09.994 "config": [ 00:05:09.994 { 00:05:09.994 "method": "bdev_set_options", 00:05:09.994 "params": { 00:05:09.994 "bdev_io_pool_size": 65535, 00:05:09.994 "bdev_io_cache_size": 256, 00:05:09.994 "bdev_auto_examine": true, 00:05:09.994 "iobuf_small_cache_size": 128, 00:05:09.994 "iobuf_large_cache_size": 16 00:05:09.994 } 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "method": "bdev_raid_set_options", 00:05:09.994 "params": { 00:05:09.994 "process_window_size_kb": 1024 00:05:09.994 } 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "method": "bdev_iscsi_set_options", 00:05:09.994 "params": { 00:05:09.994 "timeout_sec": 30 00:05:09.994 } 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "method": "bdev_nvme_set_options", 00:05:09.994 "params": { 00:05:09.994 "action_on_timeout": "none", 00:05:09.994 "timeout_us": 0, 00:05:09.994 "timeout_admin_us": 0, 00:05:09.994 "keep_alive_timeout_ms": 10000, 00:05:09.994 "arbitration_burst": 0, 00:05:09.994 "low_priority_weight": 0, 00:05:09.994 "medium_priority_weight": 0, 00:05:09.994 "high_priority_weight": 0, 00:05:09.994 "nvme_adminq_poll_period_us": 10000, 00:05:09.994 "nvme_ioq_poll_period_us": 0, 00:05:09.994 "io_queue_requests": 0, 00:05:09.994 "delay_cmd_submit": true, 
00:05:09.994 "transport_retry_count": 4, 00:05:09.994 "bdev_retry_count": 3, 00:05:09.994 "transport_ack_timeout": 0, 00:05:09.994 "ctrlr_loss_timeout_sec": 0, 00:05:09.994 "reconnect_delay_sec": 0, 00:05:09.994 "fast_io_fail_timeout_sec": 0, 00:05:09.994 "disable_auto_failback": false, 00:05:09.994 "generate_uuids": false, 00:05:09.994 "transport_tos": 0, 00:05:09.994 "nvme_error_stat": false, 00:05:09.994 "rdma_srq_size": 0, 00:05:09.994 "io_path_stat": false, 00:05:09.994 "allow_accel_sequence": false, 00:05:09.994 "rdma_max_cq_size": 0, 00:05:09.994 "rdma_cm_event_timeout_ms": 0, 00:05:09.994 "dhchap_digests": [ 00:05:09.994 "sha256", 00:05:09.994 "sha384", 00:05:09.994 "sha512" 00:05:09.994 ], 00:05:09.994 "dhchap_dhgroups": [ 00:05:09.994 "null", 00:05:09.994 "ffdhe2048", 00:05:09.994 "ffdhe3072", 00:05:09.994 "ffdhe4096", 00:05:09.994 "ffdhe6144", 00:05:09.994 "ffdhe8192" 00:05:09.994 ] 00:05:09.994 } 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "method": "bdev_nvme_set_hotplug", 00:05:09.994 "params": { 00:05:09.994 "period_us": 100000, 00:05:09.994 "enable": false 00:05:09.994 } 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "method": "bdev_wait_for_examine" 00:05:09.994 } 00:05:09.994 ] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "scsi", 00:05:09.994 "config": null 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "scheduler", 00:05:09.994 "config": [ 00:05:09.994 { 00:05:09.994 "method": "framework_set_scheduler", 00:05:09.994 "params": { 00:05:09.994 "name": "static" 00:05:09.994 } 00:05:09.994 } 00:05:09.994 ] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "vhost_scsi", 00:05:09.994 "config": [] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "vhost_blk", 00:05:09.994 "config": [] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "ublk", 00:05:09.994 "config": [] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "nbd", 00:05:09.994 "config": [] 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "subsystem": "nvmf", 00:05:09.994 "config": [ 00:05:09.994 { 00:05:09.994 "method": "nvmf_set_config", 00:05:09.994 "params": { 00:05:09.994 "discovery_filter": "match_any", 00:05:09.994 "admin_cmd_passthru": { 00:05:09.994 "identify_ctrlr": false 00:05:09.994 } 00:05:09.994 } 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "method": "nvmf_set_max_subsystems", 00:05:09.994 "params": { 00:05:09.994 "max_subsystems": 1024 00:05:09.994 } 00:05:09.994 }, 00:05:09.994 { 00:05:09.994 "method": "nvmf_set_crdt", 00:05:09.995 "params": { 00:05:09.995 "crdt1": 0, 00:05:09.995 "crdt2": 0, 00:05:09.995 "crdt3": 0 00:05:09.995 } 00:05:09.995 }, 00:05:09.995 { 00:05:09.995 "method": "nvmf_create_transport", 00:05:09.995 "params": { 00:05:09.995 "trtype": "TCP", 00:05:09.995 "max_queue_depth": 128, 00:05:09.995 "max_io_qpairs_per_ctrlr": 127, 00:05:09.995 "in_capsule_data_size": 4096, 00:05:09.995 "max_io_size": 131072, 00:05:09.995 "io_unit_size": 131072, 00:05:09.995 "max_aq_depth": 128, 00:05:09.995 "num_shared_buffers": 511, 00:05:09.995 "buf_cache_size": 4294967295, 00:05:09.995 "dif_insert_or_strip": false, 00:05:09.995 "zcopy": false, 00:05:09.995 "c2h_success": true, 00:05:09.995 "sock_priority": 0, 00:05:09.995 "abort_timeout_sec": 1, 00:05:09.995 "ack_timeout": 0, 00:05:09.995 "data_wr_pool_size": 0 00:05:09.995 } 00:05:09.995 } 00:05:09.995 ] 00:05:09.995 }, 00:05:09.995 { 00:05:09.995 "subsystem": "iscsi", 00:05:09.995 "config": [ 00:05:09.995 { 00:05:09.995 "method": "iscsi_set_options", 00:05:09.995 "params": { 00:05:09.995 "node_base": 
"iqn.2016-06.io.spdk", 00:05:09.995 "max_sessions": 128, 00:05:09.995 "max_connections_per_session": 2, 00:05:09.995 "max_queue_depth": 64, 00:05:09.995 "default_time2wait": 2, 00:05:09.995 "default_time2retain": 20, 00:05:09.995 "first_burst_length": 8192, 00:05:09.995 "immediate_data": true, 00:05:09.995 "allow_duplicated_isid": false, 00:05:09.995 "error_recovery_level": 0, 00:05:09.995 "nop_timeout": 60, 00:05:09.995 "nop_in_interval": 30, 00:05:09.995 "disable_chap": false, 00:05:09.995 "require_chap": false, 00:05:09.995 "mutual_chap": false, 00:05:09.995 "chap_group": 0, 00:05:09.995 "max_large_datain_per_connection": 64, 00:05:09.995 "max_r2t_per_connection": 4, 00:05:09.995 "pdu_pool_size": 36864, 00:05:09.995 "immediate_data_pool_size": 16384, 00:05:09.995 "data_out_pool_size": 2048 00:05:09.995 } 00:05:09.995 } 00:05:09.995 ] 00:05:09.995 } 00:05:09.995 ] 00:05:09.995 } 00:05:09.995 10:51:06 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:09.995 10:51:06 -- rpc/skip_rpc.sh@40 -- # killprocess 131158 00:05:09.995 10:51:06 -- common/autotest_common.sh@946 -- # '[' -z 131158 ']' 00:05:09.995 10:51:06 -- common/autotest_common.sh@950 -- # kill -0 131158 00:05:09.995 10:51:06 -- common/autotest_common.sh@951 -- # uname 00:05:09.995 10:51:06 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:09.995 10:51:06 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131158 00:05:09.995 10:51:06 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:09.995 10:51:06 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:09.995 10:51:06 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131158' 00:05:09.995 killing process with pid 131158 00:05:09.995 10:51:06 -- common/autotest_common.sh@965 -- # kill 131158 00:05:09.995 10:51:06 -- common/autotest_common.sh@970 -- # wait 131158 00:05:10.257 10:51:06 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=131298 00:05:10.257 10:51:06 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:10.257 10:51:06 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.553 10:51:11 -- rpc/skip_rpc.sh@50 -- # killprocess 131298 00:05:15.553 10:51:11 -- common/autotest_common.sh@946 -- # '[' -z 131298 ']' 00:05:15.553 10:51:11 -- common/autotest_common.sh@950 -- # kill -0 131298 00:05:15.553 10:51:11 -- common/autotest_common.sh@951 -- # uname 00:05:15.553 10:51:11 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.553 10:51:11 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131298 00:05:15.553 10:51:11 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.553 10:51:11 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.553 10:51:11 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131298' 00:05:15.553 killing process with pid 131298 00:05:15.553 10:51:11 -- common/autotest_common.sh@965 -- # kill 131298 00:05:15.553 10:51:11 -- common/autotest_common.sh@970 -- # wait 131298 00:05:15.553 10:51:11 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:15.553 10:51:11 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:15.553 00:05:15.553 real 0m6.510s 00:05:15.553 user 0m6.381s 00:05:15.553 sys 0m0.542s 00:05:15.553 10:51:11 -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.553 10:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:15.553 ************************************ 00:05:15.553 END TEST skip_rpc_with_json 00:05:15.553 ************************************ 00:05:15.554 10:51:11 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:15.554 10:51:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.554 10:51:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.554 10:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:15.554 ************************************ 00:05:15.554 START TEST skip_rpc_with_delay 00:05:15.554 ************************************ 00:05:15.554 10:51:12 -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:15.554 10:51:12 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.554 10:51:12 -- common/autotest_common.sh@648 -- # local es=0 00:05:15.554 10:51:12 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.554 10:51:12 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.554 10:51:12 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.554 10:51:12 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.554 10:51:12 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.554 10:51:12 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.554 10:51:12 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.554 10:51:12 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.554 10:51:12 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:15.554 10:51:12 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.554 [2024-05-15 10:51:12.099056] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
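A note on the skip_rpc_with_json run that ends above: the large JSON blob is the target configuration the test saved to test/rpc/config.json, and after killing pid 131158 it restarts spdk_tgt with --json pointing at that file, sleeps, and greps the fresh log for 'TCP Transport Init' to prove the nvmf TCP transport was rebuilt purely from the saved JSON. A minimal sketch of the same round-trip, with $SPDK_DIR and the /tmp paths as illustrative stand-ins for the workspace paths in the trace:

  # Save the running target's configuration, then restart from it (the test
  # kills the first instance before the restart).
  $SPDK_DIR/scripts/rpc.py save_config > /tmp/config.json
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 --json /tmp/config.json > /tmp/tgt.log 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' /tmp/tgt.log && echo 'transport restored from JSON'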
00:05:15.554 [2024-05-15 10:51:12.099130] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:15.554 10:51:12 -- common/autotest_common.sh@651 -- # es=1 00:05:15.554 10:51:12 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.554 10:51:12 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.554 10:51:12 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.554 00:05:15.554 real 0m0.073s 00:05:15.554 user 0m0.045s 00:05:15.554 sys 0m0.028s 00:05:15.554 10:51:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.554 10:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:15.554 ************************************ 00:05:15.554 END TEST skip_rpc_with_delay 00:05:15.554 ************************************ 00:05:15.554 10:51:12 -- rpc/skip_rpc.sh@77 -- # uname 00:05:15.554 10:51:12 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:15.554 10:51:12 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:15.554 10:51:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.554 10:51:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.554 10:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:15.554 ************************************ 00:05:15.554 START TEST exit_on_failed_rpc_init 00:05:15.554 ************************************ 00:05:15.554 10:51:12 -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:15.554 10:51:12 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=133026 00:05:15.554 10:51:12 -- rpc/skip_rpc.sh@63 -- # waitforlisten 133026 00:05:15.554 10:51:12 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.554 10:51:12 -- common/autotest_common.sh@827 -- # '[' -z 133026 ']' 00:05:15.554 10:51:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.554 10:51:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:15.554 10:51:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.554 10:51:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:15.554 10:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:15.816 [2024-05-15 10:51:12.249232] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
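The skip_rpc_with_delay test that just finished is a pure negative test: spdk_tgt is launched with both --no-rpc-server and --wait-for-rpc, and the two ERROR lines above plus a non-zero exit status are exactly what its NOT wrapper requires. Reduced to a sketch, assuming $SPDK_DIR points at the same build:

  # The flag combination must be rejected; success here would be a test failure.
  if $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'unexpected: --wait-for-rpc was accepted without an RPC server' >&2
      exit 1
  fi
  echo 'rejected as expected'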
00:05:15.816 [2024-05-15 10:51:12.249287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133026 ] 00:05:15.816 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.816 [2024-05-15 10:51:12.325649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.816 [2024-05-15 10:51:12.386001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.389 10:51:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:16.389 10:51:13 -- common/autotest_common.sh@860 -- # return 0 00:05:16.389 10:51:13 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.389 10:51:13 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.389 10:51:13 -- common/autotest_common.sh@648 -- # local es=0 00:05:16.389 10:51:13 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.389 10:51:13 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.389 10:51:13 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.389 10:51:13 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.389 10:51:13 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.389 10:51:13 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.389 10:51:13 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.389 10:51:13 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.389 10:51:13 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:16.389 10:51:13 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.650 [2024-05-15 10:51:13.067960] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:16.650 [2024-05-15 10:51:13.068014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133039 ] 00:05:16.650 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.650 [2024-05-15 10:51:13.142679] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.650 [2024-05-15 10:51:13.206350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.650 [2024-05-15 10:51:13.206409] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
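exit_on_failed_rpc_init works by provoking an RPC socket collision: the first spdk_tgt (pid 133026) owns the default /var/tmp/spdk.sock, so the second instance launched on core mask 0x2 hits the 'RPC Unix domain socket path /var/tmp/spdk.sock in use' error above and must exit instead of hanging. A sketch of the collision, with the sleep standing in for the test's waitforlisten helper; a second instance that was actually wanted would be given its own socket with -r:

  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &        # owns /var/tmp/spdk.sock
  first=$!
  sleep 2

  # Same default socket: expected to fail RPC init and exit non-zero.
  $SPDK_DIR/build/bin/spdk_tgt -m 0x2 || echo 'second instance exited as expected'

  kill -SIGINT "$first"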
00:05:16.650 [2024-05-15 10:51:13.206418] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:16.650 [2024-05-15 10:51:13.206424] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.650 10:51:13 -- common/autotest_common.sh@651 -- # es=234 00:05:16.650 10:51:13 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:16.650 10:51:13 -- common/autotest_common.sh@660 -- # es=106 00:05:16.650 10:51:13 -- common/autotest_common.sh@661 -- # case "$es" in 00:05:16.650 10:51:13 -- common/autotest_common.sh@668 -- # es=1 00:05:16.650 10:51:13 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:16.650 10:51:13 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.650 10:51:13 -- rpc/skip_rpc.sh@70 -- # killprocess 133026 00:05:16.650 10:51:13 -- common/autotest_common.sh@946 -- # '[' -z 133026 ']' 00:05:16.650 10:51:13 -- common/autotest_common.sh@950 -- # kill -0 133026 00:05:16.650 10:51:13 -- common/autotest_common.sh@951 -- # uname 00:05:16.650 10:51:13 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.650 10:51:13 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133026 00:05:16.912 10:51:13 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.912 10:51:13 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.912 10:51:13 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133026' 00:05:16.912 killing process with pid 133026 00:05:16.912 10:51:13 -- common/autotest_common.sh@965 -- # kill 133026 00:05:16.912 10:51:13 -- common/autotest_common.sh@970 -- # wait 133026 00:05:16.912 00:05:16.912 real 0m1.309s 00:05:16.912 user 0m1.539s 00:05:16.912 sys 0m0.365s 00:05:16.912 10:51:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.912 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:16.912 ************************************ 00:05:16.912 END TEST exit_on_failed_rpc_init 00:05:16.912 ************************************ 00:05:16.912 10:51:13 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.912 00:05:16.912 real 0m13.587s 00:05:16.912 user 0m13.115s 00:05:16.912 sys 0m1.522s 00:05:16.912 10:51:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.912 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:16.912 ************************************ 00:05:16.912 END TEST skip_rpc 00:05:16.912 ************************************ 00:05:17.174 10:51:13 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:17.174 10:51:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.174 10:51:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.174 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:17.174 ************************************ 00:05:17.174 START TEST rpc_client 00:05:17.174 ************************************ 00:05:17.174 10:51:13 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:17.174 * Looking for test storage... 
00:05:17.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:17.174 10:51:13 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:17.174 OK 00:05:17.174 10:51:13 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:17.174 00:05:17.174 real 0m0.130s 00:05:17.174 user 0m0.053s 00:05:17.174 sys 0m0.086s 00:05:17.174 10:51:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.174 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:17.174 ************************************ 00:05:17.174 END TEST rpc_client 00:05:17.174 ************************************ 00:05:17.174 10:51:13 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:17.174 10:51:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.174 10:51:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.174 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:17.436 ************************************ 00:05:17.436 START TEST json_config 00:05:17.436 ************************************ 00:05:17.436 10:51:13 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:17.436 10:51:13 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.436 10:51:13 -- nvmf/common.sh@7 -- # uname -s 00:05:17.436 10:51:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.436 10:51:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.436 10:51:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.436 10:51:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.436 10:51:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.436 10:51:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.436 10:51:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.436 10:51:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.436 10:51:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.436 10:51:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.436 10:51:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.436 10:51:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.436 10:51:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.436 10:51:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.436 10:51:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.436 10:51:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.436 10:51:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.436 10:51:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.436 10:51:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.436 10:51:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.436 10:51:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.436 10:51:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.436 10:51:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.436 10:51:13 -- paths/export.sh@5 -- # export PATH 00:05:17.436 10:51:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.436 10:51:13 -- nvmf/common.sh@47 -- # : 0 00:05:17.436 10:51:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:17.436 10:51:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:17.436 10:51:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.436 10:51:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.436 10:51:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.436 10:51:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:17.436 10:51:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:17.436 10:51:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:17.436 10:51:13 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:17.436 10:51:13 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:17.436 10:51:13 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:17.436 10:51:13 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:17.436 10:51:13 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:17.436 10:51:13 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:17.436 10:51:13 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:17.436 10:51:13 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:17.436 10:51:13 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:17.436 10:51:13 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:17.436 10:51:13 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:17.436 10:51:13 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:17.436 10:51:13 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:17.436 10:51:13 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:17.436 10:51:13 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.436 10:51:13 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:17.436 INFO: JSON configuration test init 00:05:17.436 10:51:13 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:17.436 10:51:13 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:17.436 10:51:13 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:17.436 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:17.436 10:51:13 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:17.436 10:51:13 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:17.436 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:17.436 10:51:13 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:17.436 10:51:13 -- json_config/common.sh@9 -- # local app=target 00:05:17.436 10:51:13 -- json_config/common.sh@10 -- # shift 00:05:17.436 10:51:13 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.436 10:51:13 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.436 10:51:13 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.436 10:51:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.436 10:51:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.436 10:51:13 -- json_config/common.sh@22 -- # app_pid["$app"]=133471 00:05:17.436 10:51:13 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.436 Waiting for target to run... 00:05:17.436 10:51:13 -- json_config/common.sh@25 -- # waitforlisten 133471 /var/tmp/spdk_tgt.sock 00:05:17.436 10:51:13 -- common/autotest_common.sh@827 -- # '[' -z 133471 ']' 00:05:17.436 10:51:13 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:17.436 10:51:13 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.437 10:51:13 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.437 10:51:13 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.437 10:51:13 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.437 10:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:17.437 [2024-05-15 10:51:14.012922] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
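The json_config suite starts its target with --wait-for-rpc on a private socket (-r /var/tmp/spdk_tgt.sock), which keeps framework initialization on hold until configuration is pushed over RPC; the gen_nvme.sh --json-with-subsystems | load_config step in the trace that follows is how that configuration arrives. A minimal sketch of the handshake, with paths assumed rather than copied from the test:

  # Start with framework initialization deferred until an RPC says otherwise.
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

  # ... startup-time options (sock, iobuf, accel) could be set here ...

  # Finish initialization so runtime RPCs (bdev, nvmf, ...) become usable.
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init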
00:05:17.437 [2024-05-15 10:51:14.012971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133471 ] 00:05:17.437 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.697 [2024-05-15 10:51:14.323690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.958 [2024-05-15 10:51:14.367893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.219 10:51:14 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.219 10:51:14 -- common/autotest_common.sh@860 -- # return 0 00:05:18.219 10:51:14 -- json_config/common.sh@26 -- # echo '' 00:05:18.219 00:05:18.219 10:51:14 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:18.219 10:51:14 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:18.219 10:51:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:18.219 10:51:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.219 10:51:14 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:18.219 10:51:14 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:18.219 10:51:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.219 10:51:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.219 10:51:14 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:18.219 10:51:14 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:18.219 10:51:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:18.791 10:51:15 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:18.791 10:51:15 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:18.791 10:51:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:18.791 10:51:15 -- common/autotest_common.sh@10 -- # set +x 00:05:18.791 10:51:15 -- json_config/json_config.sh@45 -- # local ret=0 00:05:18.791 10:51:15 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:18.791 10:51:15 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:18.791 10:51:15 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:18.791 10:51:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:18.791 10:51:15 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:19.051 10:51:15 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:19.051 10:51:15 -- json_config/json_config.sh@48 -- # local get_types 00:05:19.051 10:51:15 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:19.051 10:51:15 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:19.051 10:51:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.051 10:51:15 -- common/autotest_common.sh@10 -- # set +x 00:05:19.051 10:51:15 -- json_config/json_config.sh@55 -- # return 0 00:05:19.051 10:51:15 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:19.051 10:51:15 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:19.051 10:51:15 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:19.051 10:51:15 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:19.051 10:51:15 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:19.051 10:51:15 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:19.051 10:51:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.051 10:51:15 -- common/autotest_common.sh@10 -- # set +x 00:05:19.051 10:51:15 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:19.051 10:51:15 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:19.051 10:51:15 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:19.051 10:51:15 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.051 10:51:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.321 MallocForNvmf0 00:05:19.321 10:51:15 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.321 10:51:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.321 MallocForNvmf1 00:05:19.321 10:51:15 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.321 10:51:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.583 [2024-05-15 10:51:16.018697] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.583 10:51:16 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.583 10:51:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.583 10:51:16 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.583 10:51:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.844 10:51:16 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.844 10:51:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.844 10:51:16 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.844 10:51:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.104 [2024-05-15 10:51:16.608184] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:20.104 [2024-05-15 10:51:16.608526] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 127.0.0.1 port 4420 *** 00:05:20.104 10:51:16 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:20.104 10:51:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.104 10:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.104 10:51:16 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:20.104 10:51:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.104 10:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.104 10:51:16 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:20.104 10:51:16 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.104 10:51:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.365 MallocBdevForConfigChangeCheck 00:05:20.365 10:51:16 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:20.365 10:51:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.365 10:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.365 10:51:16 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:20.365 10:51:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.626 10:51:17 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:20.626 INFO: shutting down applications... 00:05:20.626 10:51:17 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:20.626 10:51:17 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:20.626 10:51:17 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:20.626 10:51:17 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:21.198 Calling clear_iscsi_subsystem 00:05:21.198 Calling clear_nvmf_subsystem 00:05:21.198 Calling clear_nbd_subsystem 00:05:21.198 Calling clear_ublk_subsystem 00:05:21.198 Calling clear_vhost_blk_subsystem 00:05:21.198 Calling clear_vhost_scsi_subsystem 00:05:21.198 Calling clear_bdev_subsystem 00:05:21.198 10:51:17 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:21.198 10:51:17 -- json_config/json_config.sh@343 -- # count=100 00:05:21.198 10:51:17 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:21.198 10:51:17 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.198 10:51:17 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:21.198 10:51:17 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.458 10:51:17 -- json_config/json_config.sh@345 -- # break 00:05:21.458 10:51:17 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:21.458 10:51:17 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:21.458 10:51:17 -- json_config/common.sh@31 -- # local app=target 00:05:21.458 10:51:17 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.458 
10:51:17 -- json_config/common.sh@35 -- # [[ -n 133471 ]] 00:05:21.458 10:51:17 -- json_config/common.sh@38 -- # kill -SIGINT 133471 00:05:21.458 [2024-05-15 10:51:17.963334] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:21.458 10:51:17 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.458 10:51:17 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.458 10:51:17 -- json_config/common.sh@41 -- # kill -0 133471 00:05:21.458 10:51:17 -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.031 10:51:18 -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.031 10:51:18 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.031 10:51:18 -- json_config/common.sh@41 -- # kill -0 133471 00:05:22.031 10:51:18 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.031 10:51:18 -- json_config/common.sh@43 -- # break 00:05:22.031 10:51:18 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.031 10:51:18 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.031 SPDK target shutdown done 00:05:22.031 10:51:18 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:22.031 INFO: relaunching applications... 00:05:22.031 10:51:18 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.031 10:51:18 -- json_config/common.sh@9 -- # local app=target 00:05:22.031 10:51:18 -- json_config/common.sh@10 -- # shift 00:05:22.031 10:51:18 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.031 10:51:18 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.031 10:51:18 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.031 10:51:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.031 10:51:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.031 10:51:18 -- json_config/common.sh@22 -- # app_pid["$app"]=134459 00:05:22.031 10:51:18 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.031 Waiting for target to run... 00:05:22.031 10:51:18 -- json_config/common.sh@25 -- # waitforlisten 134459 /var/tmp/spdk_tgt.sock 00:05:22.031 10:51:18 -- common/autotest_common.sh@827 -- # '[' -z 134459 ']' 00:05:22.031 10:51:18 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.031 10:51:18 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.031 10:51:18 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.031 10:51:18 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.032 10:51:18 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.032 10:51:18 -- common/autotest_common.sh@10 -- # set +x 00:05:22.032 [2024-05-15 10:51:18.528595] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
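For context on the configuration being shut down and relaunched here: it was built entirely over RPC in the trace above, namely the malloc bdevs, a TCP transport, one NVMe-oF subsystem carrying both namespaces, and a listener on 127.0.0.1:4420. The same sequence written out as plain rpc.py calls against the test's socket (arguments copied from the trace, so the sizes and names match this log rather than any recommended values):

  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck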
00:05:22.032 [2024-05-15 10:51:18.528669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134459 ] 00:05:22.032 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.293 [2024-05-15 10:51:18.817887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.293 [2024-05-15 10:51:18.861278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.865 [2024-05-15 10:51:19.337516] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.865 [2024-05-15 10:51:19.369522] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:22.865 [2024-05-15 10:51:19.369865] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:22.865 10:51:19 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.865 10:51:19 -- common/autotest_common.sh@860 -- # return 0 00:05:22.865 10:51:19 -- json_config/common.sh@26 -- # echo '' 00:05:22.865 00:05:22.865 10:51:19 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:22.865 10:51:19 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:22.865 INFO: Checking if target configuration is the same... 00:05:22.865 10:51:19 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.865 10:51:19 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:22.865 10:51:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.865 + '[' 2 -ne 2 ']' 00:05:22.865 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:22.865 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:22.865 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:22.865 +++ basename /dev/fd/62 00:05:22.865 ++ mktemp /tmp/62.XXX 00:05:22.865 + tmp_file_1=/tmp/62.l8i 00:05:22.865 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.865 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.865 + tmp_file_2=/tmp/spdk_tgt_config.json.jFP 00:05:22.865 + ret=0 00:05:22.865 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.126 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.126 + diff -u /tmp/62.l8i /tmp/spdk_tgt_config.json.jFP 00:05:23.126 + echo 'INFO: JSON config files are the same' 00:05:23.126 INFO: JSON config files are the same 00:05:23.126 + rm /tmp/62.l8i /tmp/spdk_tgt_config.json.jFP 00:05:23.126 + exit 0 00:05:23.126 10:51:19 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:23.126 10:51:19 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:23.126 INFO: changing configuration and checking if this can be detected... 
00:05:23.126 10:51:19 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.126 10:51:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.387 10:51:19 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.387 10:51:19 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:23.387 10:51:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.387 + '[' 2 -ne 2 ']' 00:05:23.387 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.387 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:23.387 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.387 +++ basename /dev/fd/62 00:05:23.387 ++ mktemp /tmp/62.XXX 00:05:23.387 + tmp_file_1=/tmp/62.HkC 00:05:23.387 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.387 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.387 + tmp_file_2=/tmp/spdk_tgt_config.json.4r7 00:05:23.387 + ret=0 00:05:23.387 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.649 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.649 + diff -u /tmp/62.HkC /tmp/spdk_tgt_config.json.4r7 00:05:23.649 + ret=1 00:05:23.649 + echo '=== Start of file: /tmp/62.HkC ===' 00:05:23.649 + cat /tmp/62.HkC 00:05:23.649 + echo '=== End of file: /tmp/62.HkC ===' 00:05:23.649 + echo '' 00:05:23.649 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4r7 ===' 00:05:23.649 + cat /tmp/spdk_tgt_config.json.4r7 00:05:23.649 + echo '=== End of file: /tmp/spdk_tgt_config.json.4r7 ===' 00:05:23.649 + echo '' 00:05:23.649 + rm /tmp/62.HkC /tmp/spdk_tgt_config.json.4r7 00:05:23.649 + exit 1 00:05:23.649 10:51:20 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:23.649 INFO: configuration change detected. 
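The two verdicts just logged come from diffing normalized configurations: json_diff.sh pulls save_config from the live target, runs both it and the on-disk spdk_tgt_config.json through config_filter.py -method sort, and lets diff decide. Deleting MallocBdevForConfigChangeCheck is what makes the second comparison return 1 and print 'configuration change detected'. Roughly, using the same helpers and paths as the test (the /tmp filenames are illustrative):

  SORT="$SPDK_DIR/test/json_config/config_filter.py -method sort"
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  $RPC save_config | $SORT > /tmp/live.json
  $SORT < $SPDK_DIR/spdk_tgt_config.json > /tmp/disk.json
  diff -u /tmp/disk.json /tmp/live.json && echo 'INFO: JSON config files are the same'

  # Removing the marker bdev changes the live config, so the next diff must fail.
  $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
  $RPC save_config | $SORT > /tmp/live.json
  diff -u /tmp/disk.json /tmp/live.json || echo 'INFO: configuration change detected.'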
00:05:23.649 10:51:20 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:23.649 10:51:20 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:23.649 10:51:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:23.649 10:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.649 10:51:20 -- json_config/json_config.sh@307 -- # local ret=0 00:05:23.649 10:51:20 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:23.649 10:51:20 -- json_config/json_config.sh@317 -- # [[ -n 134459 ]] 00:05:23.649 10:51:20 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:23.649 10:51:20 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:23.649 10:51:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:23.649 10:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.910 10:51:20 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:23.910 10:51:20 -- json_config/json_config.sh@193 -- # uname -s 00:05:23.910 10:51:20 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:23.910 10:51:20 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:23.910 10:51:20 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:23.910 10:51:20 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:23.910 10:51:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.910 10:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.910 10:51:20 -- json_config/json_config.sh@323 -- # killprocess 134459 00:05:23.910 10:51:20 -- common/autotest_common.sh@946 -- # '[' -z 134459 ']' 00:05:23.910 10:51:20 -- common/autotest_common.sh@950 -- # kill -0 134459 00:05:23.910 10:51:20 -- common/autotest_common.sh@951 -- # uname 00:05:23.910 10:51:20 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.910 10:51:20 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 134459 00:05:23.910 10:51:20 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.910 10:51:20 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.910 10:51:20 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 134459' 00:05:23.910 killing process with pid 134459 00:05:23.910 10:51:20 -- common/autotest_common.sh@965 -- # kill 134459 00:05:23.910 [2024-05-15 10:51:20.408887] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:23.910 10:51:20 -- common/autotest_common.sh@970 -- # wait 134459 00:05:24.172 10:51:20 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.172 10:51:20 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:24.172 10:51:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.172 10:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:24.172 10:51:20 -- json_config/json_config.sh@328 -- # return 0 00:05:24.172 10:51:20 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:24.172 INFO: Success 00:05:24.172 00:05:24.172 real 0m6.876s 00:05:24.172 user 0m8.352s 00:05:24.172 sys 0m1.664s 00:05:24.172 10:51:20 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.172 10:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:24.172 
************************************ 00:05:24.172 END TEST json_config 00:05:24.172 ************************************ 00:05:24.172 10:51:20 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:24.172 10:51:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.172 10:51:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.172 10:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:24.172 ************************************ 00:05:24.172 START TEST json_config_extra_key 00:05:24.172 ************************************ 00:05:24.172 10:51:20 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:24.434 10:51:20 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.434 10:51:20 -- nvmf/common.sh@7 -- # uname -s 00:05:24.434 10:51:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.434 10:51:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.434 10:51:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.434 10:51:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.434 10:51:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.434 10:51:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.435 10:51:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.435 10:51:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.435 10:51:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.435 10:51:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.435 10:51:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.435 10:51:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.435 10:51:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.435 10:51:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.435 10:51:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.435 10:51:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.435 10:51:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.435 10:51:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.435 10:51:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.435 10:51:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.435 10:51:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.435 10:51:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:24.435 10:51:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.435 10:51:20 -- paths/export.sh@5 -- # export PATH 00:05:24.435 10:51:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.435 10:51:20 -- nvmf/common.sh@47 -- # : 0 00:05:24.435 10:51:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.435 10:51:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.435 10:51:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.435 10:51:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.435 10:51:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.435 10:51:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.435 10:51:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.435 10:51:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:24.435 INFO: launching applications... 
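One detail worth calling out from the nvmf/common.sh sourcing above: the host identity is generated at source time with nvme-cli, so NVME_HOSTNQN comes back as nqn.2014-08.org.nvmexpress:uuid:<uuid> and NVME_HOSTID ends up as the bare UUID (00d0226a-fbea-ec11-9bc7-a4bf019282be in this log). An illustration of that derivation, not copied verbatim from common.sh:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the <uuid> suffix
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"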
00:05:24.435 10:51:20 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:24.435 10:51:20 -- json_config/common.sh@9 -- # local app=target 00:05:24.435 10:51:20 -- json_config/common.sh@10 -- # shift 00:05:24.435 10:51:20 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.435 10:51:20 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.435 10:51:20 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.435 10:51:20 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.435 10:51:20 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.435 10:51:20 -- json_config/common.sh@22 -- # app_pid["$app"]=135062 00:05:24.435 10:51:20 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.435 Waiting for target to run... 00:05:24.435 10:51:20 -- json_config/common.sh@25 -- # waitforlisten 135062 /var/tmp/spdk_tgt.sock 00:05:24.435 10:51:20 -- common/autotest_common.sh@827 -- # '[' -z 135062 ']' 00:05:24.435 10:51:20 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.435 10:51:20 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.435 10:51:20 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:24.435 10:51:20 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.435 10:51:20 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.435 10:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:24.435 [2024-05-15 10:51:20.969855] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:24.435 [2024-05-15 10:51:20.969932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135062 ] 00:05:24.435 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.697 [2024-05-15 10:51:21.252116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.697 [2024-05-15 10:51:21.303356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.269 10:51:21 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.269 10:51:21 -- common/autotest_common.sh@860 -- # return 0 00:05:25.269 10:51:21 -- json_config/common.sh@26 -- # echo '' 00:05:25.269 00:05:25.269 10:51:21 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:25.269 INFO: shutting down applications... 
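The teardown that follows uses the same pattern visible in the earlier json_config run: send SIGINT to the target, then poll it with kill -0 in half-second steps (common.sh allows up to 30 tries) until the pid is gone and 'SPDK target shutdown done' can be printed. As a standalone bash sketch, with $tgt_pid assumed to hold the pid recorded at launch:

  kill -SIGINT "$tgt_pid"
  for i in $(seq 1 30); do
      if ! kill -0 "$tgt_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done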
00:05:25.269 10:51:21 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:25.269 10:51:21 -- json_config/common.sh@31 -- # local app=target 00:05:25.269 10:51:21 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.269 10:51:21 -- json_config/common.sh@35 -- # [[ -n 135062 ]] 00:05:25.269 10:51:21 -- json_config/common.sh@38 -- # kill -SIGINT 135062 00:05:25.269 10:51:21 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.269 10:51:21 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.269 10:51:21 -- json_config/common.sh@41 -- # kill -0 135062 00:05:25.269 10:51:21 -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.844 10:51:22 -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.844 10:51:22 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.844 10:51:22 -- json_config/common.sh@41 -- # kill -0 135062 00:05:25.844 10:51:22 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:25.844 10:51:22 -- json_config/common.sh@43 -- # break 00:05:25.844 10:51:22 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:25.844 10:51:22 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:25.844 SPDK target shutdown done 00:05:25.844 10:51:22 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:25.844 Success 00:05:25.844 00:05:25.844 real 0m1.428s 00:05:25.844 user 0m1.035s 00:05:25.844 sys 0m0.380s 00:05:25.844 10:51:22 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.844 10:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.844 ************************************ 00:05:25.844 END TEST json_config_extra_key 00:05:25.844 ************************************ 00:05:25.844 10:51:22 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.844 10:51:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.844 10:51:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.844 10:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.844 ************************************ 00:05:25.844 START TEST alias_rpc 00:05:25.844 ************************************ 00:05:25.844 10:51:22 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.844 * Looking for test storage... 00:05:25.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:25.844 10:51:22 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:25.844 10:51:22 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=135448 00:05:25.844 10:51:22 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 135448 00:05:25.844 10:51:22 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.844 10:51:22 -- common/autotest_common.sh@827 -- # '[' -z 135448 ']' 00:05:25.844 10:51:22 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.844 10:51:22 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.844 10:51:22 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:25.844 10:51:22 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.844 10:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.844 [2024-05-15 10:51:22.474793] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:25.844 [2024-05-15 10:51:22.474845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135448 ] 00:05:26.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.105 [2024-05-15 10:51:22.551885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.105 [2024-05-15 10:51:22.608451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.678 10:51:23 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.678 10:51:23 -- common/autotest_common.sh@860 -- # return 0 00:05:26.678 10:51:23 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:26.939 10:51:23 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 135448 00:05:26.939 10:51:23 -- common/autotest_common.sh@946 -- # '[' -z 135448 ']' 00:05:26.939 10:51:23 -- common/autotest_common.sh@950 -- # kill -0 135448 00:05:26.939 10:51:23 -- common/autotest_common.sh@951 -- # uname 00:05:26.939 10:51:23 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:26.939 10:51:23 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135448 00:05:26.939 10:51:23 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:26.939 10:51:23 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:26.939 10:51:23 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135448' 00:05:26.939 killing process with pid 135448 00:05:26.939 10:51:23 -- common/autotest_common.sh@965 -- # kill 135448 00:05:26.939 10:51:23 -- common/autotest_common.sh@970 -- # wait 135448 00:05:27.201 00:05:27.201 real 0m1.339s 00:05:27.201 user 0m1.488s 00:05:27.201 sys 0m0.359s 00:05:27.201 10:51:23 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.201 10:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:27.201 ************************************ 00:05:27.201 END TEST alias_rpc 00:05:27.201 ************************************ 00:05:27.201 10:51:23 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:27.201 10:51:23 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:27.201 10:51:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.201 10:51:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.201 10:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:27.201 ************************************ 00:05:27.201 START TEST spdkcli_tcp 00:05:27.201 ************************************ 00:05:27.201 10:51:23 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:27.201 * Looking for test storage... 
00:05:27.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:27.201 10:51:23 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:27.201 10:51:23 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:27.201 10:51:23 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:27.201 10:51:23 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:27.201 10:51:23 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:27.201 10:51:23 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:27.201 10:51:23 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:27.201 10:51:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.201 10:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:27.201 10:51:23 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=135833 00:05:27.201 10:51:23 -- spdkcli/tcp.sh@27 -- # waitforlisten 135833 00:05:27.201 10:51:23 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:27.201 10:51:23 -- common/autotest_common.sh@827 -- # '[' -z 135833 ']' 00:05:27.201 10:51:23 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.201 10:51:23 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.201 10:51:23 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.201 10:51:23 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.201 10:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:27.464 [2024-05-15 10:51:23.891061] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:05:27.464 [2024-05-15 10:51:23.891114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135833 ] 00:05:27.464 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.464 [2024-05-15 10:51:23.965862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.464 [2024-05-15 10:51:24.021460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.464 [2024-05-15 10:51:24.021460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.036 10:51:24 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.036 10:51:24 -- common/autotest_common.sh@860 -- # return 0 00:05:28.036 10:51:24 -- spdkcli/tcp.sh@31 -- # socat_pid=135852 00:05:28.036 10:51:24 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:28.036 10:51:24 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:28.299 [ 00:05:28.299 "bdev_malloc_delete", 00:05:28.299 "bdev_malloc_create", 00:05:28.299 "bdev_null_resize", 00:05:28.299 "bdev_null_delete", 00:05:28.299 "bdev_null_create", 00:05:28.299 "bdev_nvme_cuse_unregister", 00:05:28.299 "bdev_nvme_cuse_register", 00:05:28.299 "bdev_opal_new_user", 00:05:28.299 "bdev_opal_set_lock_state", 00:05:28.299 "bdev_opal_delete", 00:05:28.299 "bdev_opal_get_info", 00:05:28.299 "bdev_opal_create", 00:05:28.299 "bdev_nvme_opal_revert", 00:05:28.299 "bdev_nvme_opal_init", 00:05:28.299 "bdev_nvme_send_cmd", 00:05:28.299 "bdev_nvme_get_path_iostat", 00:05:28.299 "bdev_nvme_get_mdns_discovery_info", 00:05:28.299 "bdev_nvme_stop_mdns_discovery", 00:05:28.299 "bdev_nvme_start_mdns_discovery", 00:05:28.299 "bdev_nvme_set_multipath_policy", 00:05:28.299 "bdev_nvme_set_preferred_path", 00:05:28.299 "bdev_nvme_get_io_paths", 00:05:28.299 "bdev_nvme_remove_error_injection", 00:05:28.299 "bdev_nvme_add_error_injection", 00:05:28.299 "bdev_nvme_get_discovery_info", 00:05:28.299 "bdev_nvme_stop_discovery", 00:05:28.299 "bdev_nvme_start_discovery", 00:05:28.299 "bdev_nvme_get_controller_health_info", 00:05:28.299 "bdev_nvme_disable_controller", 00:05:28.299 "bdev_nvme_enable_controller", 00:05:28.299 "bdev_nvme_reset_controller", 00:05:28.299 "bdev_nvme_get_transport_statistics", 00:05:28.299 "bdev_nvme_apply_firmware", 00:05:28.299 "bdev_nvme_detach_controller", 00:05:28.299 "bdev_nvme_get_controllers", 00:05:28.299 "bdev_nvme_attach_controller", 00:05:28.299 "bdev_nvme_set_hotplug", 00:05:28.299 "bdev_nvme_set_options", 00:05:28.299 "bdev_passthru_delete", 00:05:28.299 "bdev_passthru_create", 00:05:28.299 "bdev_lvol_grow_lvstore", 00:05:28.299 "bdev_lvol_get_lvols", 00:05:28.299 "bdev_lvol_get_lvstores", 00:05:28.299 "bdev_lvol_delete", 00:05:28.299 "bdev_lvol_set_read_only", 00:05:28.299 "bdev_lvol_resize", 00:05:28.299 "bdev_lvol_decouple_parent", 00:05:28.299 "bdev_lvol_inflate", 00:05:28.299 "bdev_lvol_rename", 00:05:28.299 "bdev_lvol_clone_bdev", 00:05:28.299 "bdev_lvol_clone", 00:05:28.299 "bdev_lvol_snapshot", 00:05:28.299 "bdev_lvol_create", 00:05:28.299 "bdev_lvol_delete_lvstore", 00:05:28.299 "bdev_lvol_rename_lvstore", 00:05:28.299 "bdev_lvol_create_lvstore", 00:05:28.299 "bdev_raid_set_options", 00:05:28.299 "bdev_raid_remove_base_bdev", 00:05:28.299 "bdev_raid_add_base_bdev", 00:05:28.299 "bdev_raid_delete", 00:05:28.299 "bdev_raid_create", 
00:05:28.299 "bdev_raid_get_bdevs", 00:05:28.299 "bdev_error_inject_error", 00:05:28.299 "bdev_error_delete", 00:05:28.299 "bdev_error_create", 00:05:28.299 "bdev_split_delete", 00:05:28.299 "bdev_split_create", 00:05:28.299 "bdev_delay_delete", 00:05:28.299 "bdev_delay_create", 00:05:28.299 "bdev_delay_update_latency", 00:05:28.299 "bdev_zone_block_delete", 00:05:28.299 "bdev_zone_block_create", 00:05:28.299 "blobfs_create", 00:05:28.299 "blobfs_detect", 00:05:28.299 "blobfs_set_cache_size", 00:05:28.299 "bdev_aio_delete", 00:05:28.299 "bdev_aio_rescan", 00:05:28.299 "bdev_aio_create", 00:05:28.299 "bdev_ftl_set_property", 00:05:28.299 "bdev_ftl_get_properties", 00:05:28.299 "bdev_ftl_get_stats", 00:05:28.299 "bdev_ftl_unmap", 00:05:28.299 "bdev_ftl_unload", 00:05:28.299 "bdev_ftl_delete", 00:05:28.299 "bdev_ftl_load", 00:05:28.299 "bdev_ftl_create", 00:05:28.299 "bdev_virtio_attach_controller", 00:05:28.299 "bdev_virtio_scsi_get_devices", 00:05:28.299 "bdev_virtio_detach_controller", 00:05:28.299 "bdev_virtio_blk_set_hotplug", 00:05:28.299 "bdev_iscsi_delete", 00:05:28.299 "bdev_iscsi_create", 00:05:28.299 "bdev_iscsi_set_options", 00:05:28.299 "accel_error_inject_error", 00:05:28.299 "ioat_scan_accel_module", 00:05:28.299 "dsa_scan_accel_module", 00:05:28.299 "iaa_scan_accel_module", 00:05:28.299 "vfu_virtio_create_scsi_endpoint", 00:05:28.299 "vfu_virtio_scsi_remove_target", 00:05:28.299 "vfu_virtio_scsi_add_target", 00:05:28.299 "vfu_virtio_create_blk_endpoint", 00:05:28.299 "vfu_virtio_delete_endpoint", 00:05:28.299 "keyring_file_remove_key", 00:05:28.299 "keyring_file_add_key", 00:05:28.299 "iscsi_get_histogram", 00:05:28.299 "iscsi_enable_histogram", 00:05:28.299 "iscsi_set_options", 00:05:28.299 "iscsi_get_auth_groups", 00:05:28.299 "iscsi_auth_group_remove_secret", 00:05:28.299 "iscsi_auth_group_add_secret", 00:05:28.299 "iscsi_delete_auth_group", 00:05:28.299 "iscsi_create_auth_group", 00:05:28.299 "iscsi_set_discovery_auth", 00:05:28.299 "iscsi_get_options", 00:05:28.299 "iscsi_target_node_request_logout", 00:05:28.299 "iscsi_target_node_set_redirect", 00:05:28.299 "iscsi_target_node_set_auth", 00:05:28.299 "iscsi_target_node_add_lun", 00:05:28.299 "iscsi_get_stats", 00:05:28.299 "iscsi_get_connections", 00:05:28.299 "iscsi_portal_group_set_auth", 00:05:28.299 "iscsi_start_portal_group", 00:05:28.299 "iscsi_delete_portal_group", 00:05:28.299 "iscsi_create_portal_group", 00:05:28.299 "iscsi_get_portal_groups", 00:05:28.299 "iscsi_delete_target_node", 00:05:28.299 "iscsi_target_node_remove_pg_ig_maps", 00:05:28.299 "iscsi_target_node_add_pg_ig_maps", 00:05:28.299 "iscsi_create_target_node", 00:05:28.299 "iscsi_get_target_nodes", 00:05:28.299 "iscsi_delete_initiator_group", 00:05:28.299 "iscsi_initiator_group_remove_initiators", 00:05:28.299 "iscsi_initiator_group_add_initiators", 00:05:28.299 "iscsi_create_initiator_group", 00:05:28.299 "iscsi_get_initiator_groups", 00:05:28.299 "nvmf_set_crdt", 00:05:28.299 "nvmf_set_config", 00:05:28.299 "nvmf_set_max_subsystems", 00:05:28.299 "nvmf_subsystem_get_listeners", 00:05:28.299 "nvmf_subsystem_get_qpairs", 00:05:28.299 "nvmf_subsystem_get_controllers", 00:05:28.299 "nvmf_get_stats", 00:05:28.299 "nvmf_get_transports", 00:05:28.299 "nvmf_create_transport", 00:05:28.299 "nvmf_get_targets", 00:05:28.299 "nvmf_delete_target", 00:05:28.299 "nvmf_create_target", 00:05:28.299 "nvmf_subsystem_allow_any_host", 00:05:28.299 "nvmf_subsystem_remove_host", 00:05:28.299 "nvmf_subsystem_add_host", 00:05:28.299 "nvmf_ns_remove_host", 00:05:28.299 
"nvmf_ns_add_host", 00:05:28.299 "nvmf_subsystem_remove_ns", 00:05:28.299 "nvmf_subsystem_add_ns", 00:05:28.299 "nvmf_subsystem_listener_set_ana_state", 00:05:28.299 "nvmf_discovery_get_referrals", 00:05:28.299 "nvmf_discovery_remove_referral", 00:05:28.299 "nvmf_discovery_add_referral", 00:05:28.299 "nvmf_subsystem_remove_listener", 00:05:28.299 "nvmf_subsystem_add_listener", 00:05:28.299 "nvmf_delete_subsystem", 00:05:28.299 "nvmf_create_subsystem", 00:05:28.299 "nvmf_get_subsystems", 00:05:28.299 "env_dpdk_get_mem_stats", 00:05:28.299 "nbd_get_disks", 00:05:28.299 "nbd_stop_disk", 00:05:28.299 "nbd_start_disk", 00:05:28.299 "ublk_recover_disk", 00:05:28.299 "ublk_get_disks", 00:05:28.299 "ublk_stop_disk", 00:05:28.299 "ublk_start_disk", 00:05:28.299 "ublk_destroy_target", 00:05:28.299 "ublk_create_target", 00:05:28.299 "virtio_blk_create_transport", 00:05:28.299 "virtio_blk_get_transports", 00:05:28.299 "vhost_controller_set_coalescing", 00:05:28.299 "vhost_get_controllers", 00:05:28.299 "vhost_delete_controller", 00:05:28.299 "vhost_create_blk_controller", 00:05:28.299 "vhost_scsi_controller_remove_target", 00:05:28.299 "vhost_scsi_controller_add_target", 00:05:28.299 "vhost_start_scsi_controller", 00:05:28.299 "vhost_create_scsi_controller", 00:05:28.299 "thread_set_cpumask", 00:05:28.299 "framework_get_scheduler", 00:05:28.299 "framework_set_scheduler", 00:05:28.299 "framework_get_reactors", 00:05:28.299 "thread_get_io_channels", 00:05:28.299 "thread_get_pollers", 00:05:28.299 "thread_get_stats", 00:05:28.299 "framework_monitor_context_switch", 00:05:28.299 "spdk_kill_instance", 00:05:28.299 "log_enable_timestamps", 00:05:28.299 "log_get_flags", 00:05:28.299 "log_clear_flag", 00:05:28.299 "log_set_flag", 00:05:28.299 "log_get_level", 00:05:28.299 "log_set_level", 00:05:28.299 "log_get_print_level", 00:05:28.299 "log_set_print_level", 00:05:28.299 "framework_enable_cpumask_locks", 00:05:28.299 "framework_disable_cpumask_locks", 00:05:28.299 "framework_wait_init", 00:05:28.299 "framework_start_init", 00:05:28.299 "scsi_get_devices", 00:05:28.299 "bdev_get_histogram", 00:05:28.299 "bdev_enable_histogram", 00:05:28.299 "bdev_set_qos_limit", 00:05:28.299 "bdev_set_qd_sampling_period", 00:05:28.299 "bdev_get_bdevs", 00:05:28.299 "bdev_reset_iostat", 00:05:28.299 "bdev_get_iostat", 00:05:28.299 "bdev_examine", 00:05:28.299 "bdev_wait_for_examine", 00:05:28.299 "bdev_set_options", 00:05:28.299 "notify_get_notifications", 00:05:28.299 "notify_get_types", 00:05:28.299 "accel_get_stats", 00:05:28.299 "accel_set_options", 00:05:28.299 "accel_set_driver", 00:05:28.299 "accel_crypto_key_destroy", 00:05:28.299 "accel_crypto_keys_get", 00:05:28.299 "accel_crypto_key_create", 00:05:28.299 "accel_assign_opc", 00:05:28.299 "accel_get_module_info", 00:05:28.299 "accel_get_opc_assignments", 00:05:28.299 "vmd_rescan", 00:05:28.299 "vmd_remove_device", 00:05:28.299 "vmd_enable", 00:05:28.299 "sock_get_default_impl", 00:05:28.299 "sock_set_default_impl", 00:05:28.299 "sock_impl_set_options", 00:05:28.299 "sock_impl_get_options", 00:05:28.299 "iobuf_get_stats", 00:05:28.299 "iobuf_set_options", 00:05:28.299 "keyring_get_keys", 00:05:28.299 "framework_get_pci_devices", 00:05:28.299 "framework_get_config", 00:05:28.299 "framework_get_subsystems", 00:05:28.299 "vfu_tgt_set_base_path", 00:05:28.299 "trace_get_info", 00:05:28.300 "trace_get_tpoint_group_mask", 00:05:28.300 "trace_disable_tpoint_group", 00:05:28.300 "trace_enable_tpoint_group", 00:05:28.300 "trace_clear_tpoint_mask", 00:05:28.300 
"trace_set_tpoint_mask", 00:05:28.300 "spdk_get_version", 00:05:28.300 "rpc_get_methods" 00:05:28.300 ] 00:05:28.300 10:51:24 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:28.300 10:51:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.300 10:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:28.300 10:51:24 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:28.300 10:51:24 -- spdkcli/tcp.sh@38 -- # killprocess 135833 00:05:28.300 10:51:24 -- common/autotest_common.sh@946 -- # '[' -z 135833 ']' 00:05:28.300 10:51:24 -- common/autotest_common.sh@950 -- # kill -0 135833 00:05:28.300 10:51:24 -- common/autotest_common.sh@951 -- # uname 00:05:28.300 10:51:24 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.300 10:51:24 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135833 00:05:28.300 10:51:24 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:28.300 10:51:24 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:28.300 10:51:24 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135833' 00:05:28.300 killing process with pid 135833 00:05:28.300 10:51:24 -- common/autotest_common.sh@965 -- # kill 135833 00:05:28.300 10:51:24 -- common/autotest_common.sh@970 -- # wait 135833 00:05:28.562 00:05:28.562 real 0m1.371s 00:05:28.562 user 0m2.567s 00:05:28.562 sys 0m0.414s 00:05:28.562 10:51:25 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.562 10:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:28.562 ************************************ 00:05:28.562 END TEST spdkcli_tcp 00:05:28.562 ************************************ 00:05:28.562 10:51:25 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.562 10:51:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.562 10:51:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.562 10:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:28.562 ************************************ 00:05:28.562 START TEST dpdk_mem_utility 00:05:28.562 ************************************ 00:05:28.563 10:51:25 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.824 * Looking for test storage... 00:05:28.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:28.824 10:51:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.824 10:51:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=136190 00:05:28.824 10:51:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 136190 00:05:28.824 10:51:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.824 10:51:25 -- common/autotest_common.sh@827 -- # '[' -z 136190 ']' 00:05:28.824 10:51:25 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.824 10:51:25 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:28.824 10:51:25 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.824 10:51:25 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:28.824 10:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:28.824 [2024-05-15 10:51:25.344340] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:28.824 [2024-05-15 10:51:25.344412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136190 ] 00:05:28.824 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.824 [2024-05-15 10:51:25.425497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.087 [2024-05-15 10:51:25.488255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.660 10:51:26 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:29.660 10:51:26 -- common/autotest_common.sh@860 -- # return 0 00:05:29.660 10:51:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:29.660 10:51:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:29.660 10:51:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.660 10:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.660 { 00:05:29.660 "filename": "/tmp/spdk_mem_dump.txt" 00:05:29.660 } 00:05:29.660 10:51:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.660 10:51:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:29.660 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:29.660 1 heaps totaling size 814.000000 MiB 00:05:29.660 size: 814.000000 MiB heap id: 0 00:05:29.660 end heaps---------- 00:05:29.660 8 mempools totaling size 598.116089 MiB 00:05:29.660 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:29.660 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:29.660 size: 84.521057 MiB name: bdev_io_136190 00:05:29.660 size: 51.011292 MiB name: evtpool_136190 00:05:29.660 size: 50.003479 MiB name: msgpool_136190 00:05:29.660 size: 21.763794 MiB name: PDU_Pool 00:05:29.660 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:29.660 size: 0.026123 MiB name: Session_Pool 00:05:29.660 end mempools------- 00:05:29.660 6 memzones totaling size 4.142822 MiB 00:05:29.660 size: 1.000366 MiB name: RG_ring_0_136190 00:05:29.660 size: 1.000366 MiB name: RG_ring_1_136190 00:05:29.660 size: 1.000366 MiB name: RG_ring_4_136190 00:05:29.660 size: 1.000366 MiB name: RG_ring_5_136190 00:05:29.660 size: 0.125366 MiB name: RG_ring_2_136190 00:05:29.660 size: 0.015991 MiB name: RG_ring_3_136190 00:05:29.660 end memzones------- 00:05:29.660 10:51:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:29.660 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:29.660 list of free elements. 
size: 12.519348 MiB 00:05:29.660 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:29.660 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:29.660 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:29.660 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:29.660 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:29.660 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:29.660 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:29.660 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:29.660 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:29.660 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:29.660 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:29.660 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:29.660 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:29.660 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:29.660 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:29.660 list of standard malloc elements. size: 199.218079 MiB 00:05:29.660 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:29.660 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:29.660 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:29.660 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:29.660 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:29.660 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:29.660 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:29.660 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:29.660 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:29.660 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:29.660 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:29.660 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:29.661 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:29.661 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:29.661 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:29.661 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:29.661 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:29.661 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:29.661 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:29.661 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:29.661 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:29.661 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:29.661 list of memzone associated elements. size: 602.262573 MiB 00:05:29.661 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:29.661 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:29.661 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:29.661 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:29.661 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:29.661 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_136190_0 00:05:29.661 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:29.661 associated memzone info: size: 48.002930 MiB name: MP_evtpool_136190_0 00:05:29.661 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:29.661 associated memzone info: size: 48.002930 MiB name: MP_msgpool_136190_0 00:05:29.661 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:29.661 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:29.661 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:29.661 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:29.661 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:29.661 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_136190 00:05:29.661 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:29.661 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_136190 00:05:29.661 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:29.661 associated memzone info: size: 1.007996 MiB name: MP_evtpool_136190 00:05:29.661 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:29.661 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:29.661 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:29.661 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:29.661 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:29.661 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:29.661 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:29.661 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:29.661 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:29.661 associated memzone info: size: 1.000366 MiB name: RG_ring_0_136190 00:05:29.661 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:29.661 associated memzone info: size: 1.000366 MiB name: RG_ring_1_136190 00:05:29.661 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:29.661 associated memzone info: size: 1.000366 MiB name: RG_ring_4_136190 00:05:29.661 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:29.661 associated memzone info: size: 1.000366 MiB name: RG_ring_5_136190 00:05:29.661 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:29.661 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_136190 00:05:29.661 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:29.661 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:29.661 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:29.661 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:29.661 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:29.661 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:29.661 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:29.661 associated memzone info: size: 0.125366 MiB name: RG_ring_2_136190 00:05:29.661 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:29.661 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:29.661 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:29.661 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:29.661 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:29.661 associated memzone info: size: 0.015991 MiB name: RG_ring_3_136190 00:05:29.661 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:29.661 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:29.661 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:29.661 associated memzone info: size: 0.000183 MiB name: MP_msgpool_136190 00:05:29.661 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:29.661 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_136190 00:05:29.661 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:29.661 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:29.661 10:51:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:29.661 10:51:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 136190 00:05:29.661 10:51:26 -- common/autotest_common.sh@946 -- # '[' -z 136190 ']' 00:05:29.661 10:51:26 -- common/autotest_common.sh@950 -- # kill -0 136190 00:05:29.661 10:51:26 -- common/autotest_common.sh@951 -- # uname 00:05:29.661 10:51:26 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.661 10:51:26 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 136190 00:05:29.661 10:51:26 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.661 10:51:26 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.661 10:51:26 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 136190' 00:05:29.661 killing process with pid 136190 00:05:29.661 10:51:26 -- common/autotest_common.sh@965 -- # kill 136190 00:05:29.661 10:51:26 -- common/autotest_common.sh@970 -- # wait 136190 00:05:29.923 00:05:29.923 real 0m1.267s 00:05:29.923 user 0m1.353s 00:05:29.923 sys 0m0.366s 00:05:29.923 10:51:26 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.923 10:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.923 ************************************ 00:05:29.923 END TEST dpdk_mem_utility 00:05:29.923 ************************************ 00:05:29.923 10:51:26 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:29.923 10:51:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.923 10:51:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.923 10:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.923 
************************************ 00:05:29.923 START TEST event 00:05:29.923 ************************************ 00:05:29.923 10:51:26 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:30.185 * Looking for test storage... 00:05:30.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:30.185 10:51:26 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:30.185 10:51:26 -- bdev/nbd_common.sh@6 -- # set -e 00:05:30.185 10:51:26 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.185 10:51:26 -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:30.185 10:51:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.185 10:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:30.185 ************************************ 00:05:30.185 START TEST event_perf 00:05:30.185 ************************************ 00:05:30.185 10:51:26 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.185 Running I/O for 1 seconds...[2024-05-15 10:51:26.697080] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:30.185 [2024-05-15 10:51:26.697166] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136425 ] 00:05:30.185 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.185 [2024-05-15 10:51:26.781221] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.446 [2024-05-15 10:51:26.854386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.446 [2024-05-15 10:51:26.854516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.447 [2024-05-15 10:51:26.854672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.447 [2024-05-15 10:51:26.854777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.397 Running I/O for 1 seconds... 00:05:31.397 lcore 0: 169464 00:05:31.397 lcore 1: 169467 00:05:31.397 lcore 2: 169466 00:05:31.397 lcore 3: 169466 00:05:31.397 done. 
00:05:31.397 00:05:31.397 real 0m1.222s 00:05:31.397 user 0m4.122s 00:05:31.397 sys 0m0.096s 00:05:31.397 10:51:27 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.397 10:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.397 ************************************ 00:05:31.397 END TEST event_perf 00:05:31.397 ************************************ 00:05:31.397 10:51:27 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:31.397 10:51:27 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:31.397 10:51:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.397 10:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.397 ************************************ 00:05:31.397 START TEST event_reactor 00:05:31.397 ************************************ 00:05:31.397 10:51:27 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:31.397 [2024-05-15 10:51:27.999927] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:31.397 [2024-05-15 10:51:28.000023] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136672 ] 00:05:31.397 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.659 [2024-05-15 10:51:28.080829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.659 [2024-05-15 10:51:28.137370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.602 test_start 00:05:32.602 oneshot 00:05:32.602 tick 100 00:05:32.602 tick 100 00:05:32.602 tick 250 00:05:32.602 tick 100 00:05:32.602 tick 100 00:05:32.602 tick 100 00:05:32.602 tick 250 00:05:32.602 tick 500 00:05:32.602 tick 100 00:05:32.602 tick 100 00:05:32.602 tick 250 00:05:32.602 tick 100 00:05:32.602 tick 100 00:05:32.602 test_end 00:05:32.602 00:05:32.602 real 0m1.202s 00:05:32.602 user 0m1.116s 00:05:32.602 sys 0m0.081s 00:05:32.602 10:51:29 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.602 10:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:32.602 ************************************ 00:05:32.602 END TEST event_reactor 00:05:32.602 ************************************ 00:05:32.602 10:51:29 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.602 10:51:29 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:32.602 10:51:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.602 10:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:32.863 ************************************ 00:05:32.863 START TEST event_reactor_perf 00:05:32.863 ************************************ 00:05:32.863 10:51:29 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.863 [2024-05-15 10:51:29.283520] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:05:32.863 [2024-05-15 10:51:29.283617] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137022 ] 00:05:32.863 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.863 [2024-05-15 10:51:29.364373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.863 [2024-05-15 10:51:29.423255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.249 test_start 00:05:34.249 test_end 00:05:34.249 Performance: 532853 events per second 00:05:34.249 00:05:34.249 real 0m1.205s 00:05:34.249 user 0m1.117s 00:05:34.249 sys 0m0.084s 00:05:34.249 10:51:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.249 10:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:34.249 ************************************ 00:05:34.249 END TEST event_reactor_perf 00:05:34.249 ************************************ 00:05:34.249 10:51:30 -- event/event.sh@49 -- # uname -s 00:05:34.249 10:51:30 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:34.249 10:51:30 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:34.249 10:51:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.249 10:51:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.249 10:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:34.249 ************************************ 00:05:34.249 START TEST event_scheduler 00:05:34.249 ************************************ 00:05:34.249 10:51:30 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:34.249 * Looking for test storage... 00:05:34.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:34.249 10:51:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:34.249 10:51:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=137405 00:05:34.249 10:51:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.249 10:51:30 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:34.249 10:51:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 137405 00:05:34.249 10:51:30 -- common/autotest_common.sh@827 -- # '[' -z 137405 ']' 00:05:34.249 10:51:30 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.249 10:51:30 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.249 10:51:30 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.249 10:51:30 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.249 10:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:34.249 [2024-05-15 10:51:30.702327] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:05:34.249 [2024-05-15 10:51:30.702399] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137405 ] 00:05:34.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.249 [2024-05-15 10:51:30.782538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.249 [2024-05-15 10:51:30.876285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.249 [2024-05-15 10:51:30.876448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.249 [2024-05-15 10:51:30.876614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.249 [2024-05-15 10:51:30.876614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.194 10:51:31 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:35.194 10:51:31 -- common/autotest_common.sh@860 -- # return 0 00:05:35.194 10:51:31 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.194 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.194 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.194 POWER: Env isn't set yet! 00:05:35.194 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:35.194 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.194 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.194 POWER: Attempting to initialise PSTAT power management... 00:05:35.194 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:35.194 POWER: Initialized successfully for lcore 0 power management 00:05:35.194 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:35.194 POWER: Initialized successfully for lcore 1 power management 00:05:35.194 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:35.194 POWER: Initialized successfully for lcore 2 power management 00:05:35.195 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:35.195 POWER: Initialized successfully for lcore 3 power management 00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 [2024-05-15 10:51:31.604774] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.195 10:51:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.195 10:51:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 ************************************ 00:05:35.195 START TEST scheduler_create_thread 00:05:35.195 ************************************ 00:05:35.195 10:51:31 -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 2 00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 3 00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 4 00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 5 00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 6 00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 7 00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 8 00:05:35.195 10:51:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.195 10:51:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:35.195 10:51:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.195 10:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.769 9 00:05:35.769 
10:51:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.769 10:51:32 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:35.769 10:51:32 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.769 10:51:32 -- common/autotest_common.sh@10 -- # set +x 00:05:36.718 10 00:05:36.718 10:51:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.718 10:51:33 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:36.718 10:51:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.718 10:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:37.670 10:51:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.670 10:51:34 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.670 10:51:34 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.670 10:51:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.670 10:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:38.244 10:51:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.244 10:51:34 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:38.244 10:51:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.244 10:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:39.190 10:51:35 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.190 10:51:35 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.190 10:51:35 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.190 10:51:35 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.190 10:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:39.764 10:51:36 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.764 00:05:39.764 real 0m4.465s 00:05:39.764 user 0m0.026s 00:05:39.764 sys 0m0.004s 00:05:39.764 10:51:36 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.764 10:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.764 ************************************ 00:05:39.764 END TEST scheduler_create_thread 00:05:39.764 ************************************ 00:05:39.764 10:51:36 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:39.764 10:51:36 -- scheduler/scheduler.sh@46 -- # killprocess 137405 00:05:39.764 10:51:36 -- common/autotest_common.sh@946 -- # '[' -z 137405 ']' 00:05:39.764 10:51:36 -- common/autotest_common.sh@950 -- # kill -0 137405 00:05:39.764 10:51:36 -- common/autotest_common.sh@951 -- # uname 00:05:39.764 10:51:36 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.764 10:51:36 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 137405 00:05:39.764 10:51:36 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:39.764 10:51:36 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:39.764 10:51:36 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 137405' 00:05:39.764 killing process with pid 137405 00:05:39.764 10:51:36 -- common/autotest_common.sh@965 -- # kill 137405 00:05:39.764 10:51:36 -- common/autotest_common.sh@970 -- # wait 137405 00:05:39.764 [2024-05-15 10:51:36.391568] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:40.026 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:40.026 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:40.026 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:40.026 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:40.026 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:40.026 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:40.026 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:40.026 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:40.026 00:05:40.026 real 0m5.999s 00:05:40.026 user 0m14.221s 00:05:40.026 sys 0m0.384s 00:05:40.026 10:51:36 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.026 10:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:40.026 ************************************ 00:05:40.026 END TEST event_scheduler 00:05:40.026 ************************************ 00:05:40.026 10:51:36 -- event/event.sh@51 -- # modprobe -n nbd 00:05:40.026 10:51:36 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:40.026 10:51:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.026 10:51:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.026 10:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:40.026 ************************************ 00:05:40.026 START TEST app_repeat 00:05:40.026 ************************************ 00:05:40.026 10:51:36 -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:40.026 10:51:36 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.026 10:51:36 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.026 10:51:36 -- event/event.sh@13 -- # local nbd_list 00:05:40.026 10:51:36 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.026 10:51:36 -- event/event.sh@14 -- # local bdev_list 00:05:40.026 10:51:36 -- event/event.sh@15 -- # local repeat_times=4 00:05:40.026 10:51:36 -- event/event.sh@17 -- # modprobe nbd 00:05:40.026 10:51:36 -- event/event.sh@19 -- # repeat_pid=138517 00:05:40.026 10:51:36 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.026 10:51:36 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:40.026 10:51:36 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 138517' 00:05:40.026 Process app_repeat pid: 138517 00:05:40.026 10:51:36 -- event/event.sh@23 -- # for i in {0..2} 00:05:40.026 10:51:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:40.026 spdk_app_start Round 0 00:05:40.026 10:51:36 -- event/event.sh@25 -- # waitforlisten 138517 /var/tmp/spdk-nbd.sock 00:05:40.026 10:51:36 -- common/autotest_common.sh@827 -- # '[' -z 138517 ']' 00:05:40.026 10:51:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.026 10:51:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.026 10:51:36 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:40.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.026 10:51:36 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.026 10:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:40.026 [2024-05-15 10:51:36.676806] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:40.026 [2024-05-15 10:51:36.676865] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138517 ] 00:05:40.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.287 [2024-05-15 10:51:36.739828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.288 [2024-05-15 10:51:36.810207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.288 [2024-05-15 10:51:36.810210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.860 10:51:37 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.860 10:51:37 -- common/autotest_common.sh@860 -- # return 0 00:05:40.860 10:51:37 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.121 Malloc0 00:05:41.121 10:51:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.384 Malloc1 00:05:41.384 10:51:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@12 -- # local i 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.384 /dev/nbd0 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.384 10:51:37 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:41.384 10:51:37 -- common/autotest_common.sh@865 -- # local i 00:05:41.384 10:51:37 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:41.384 10:51:37 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:41.384 10:51:37 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:41.384 10:51:37 -- 
common/autotest_common.sh@869 -- # break 00:05:41.384 10:51:37 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:41.384 10:51:37 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:41.384 10:51:37 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.384 1+0 records in 00:05:41.384 1+0 records out 00:05:41.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253826 s, 16.1 MB/s 00:05:41.384 10:51:37 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.384 10:51:37 -- common/autotest_common.sh@882 -- # size=4096 00:05:41.384 10:51:37 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.384 10:51:37 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:41.384 10:51:37 -- common/autotest_common.sh@885 -- # return 0 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.384 10:51:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.645 /dev/nbd1 00:05:41.645 10:51:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.645 10:51:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.645 10:51:38 -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:41.645 10:51:38 -- common/autotest_common.sh@865 -- # local i 00:05:41.645 10:51:38 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:41.645 10:51:38 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:41.645 10:51:38 -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:41.645 10:51:38 -- common/autotest_common.sh@869 -- # break 00:05:41.645 10:51:38 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:41.645 10:51:38 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:41.645 10:51:38 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.645 1+0 records in 00:05:41.645 1+0 records out 00:05:41.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331322 s, 12.4 MB/s 00:05:41.645 10:51:38 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.645 10:51:38 -- common/autotest_common.sh@882 -- # size=4096 00:05:41.645 10:51:38 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.645 10:51:38 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:41.645 10:51:38 -- common/autotest_common.sh@885 -- # return 0 00:05:41.645 10:51:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.645 10:51:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.645 10:51:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.645 10:51:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.645 10:51:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.908 { 00:05:41.908 "nbd_device": "/dev/nbd0", 00:05:41.908 "bdev_name": "Malloc0" 00:05:41.908 }, 00:05:41.908 { 00:05:41.908 "nbd_device": "/dev/nbd1", 
00:05:41.908 "bdev_name": "Malloc1" 00:05:41.908 } 00:05:41.908 ]' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.908 { 00:05:41.908 "nbd_device": "/dev/nbd0", 00:05:41.908 "bdev_name": "Malloc0" 00:05:41.908 }, 00:05:41.908 { 00:05:41.908 "nbd_device": "/dev/nbd1", 00:05:41.908 "bdev_name": "Malloc1" 00:05:41.908 } 00:05:41.908 ]' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.908 /dev/nbd1' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.908 /dev/nbd1' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.908 256+0 records in 00:05:41.908 256+0 records out 00:05:41.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124765 s, 84.0 MB/s 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.908 256+0 records in 00:05:41.908 256+0 records out 00:05:41.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0435744 s, 24.1 MB/s 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.908 256+0 records in 00:05:41.908 256+0 records out 00:05:41.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166983 s, 62.8 MB/s 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@51 -- # local i 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.908 10:51:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@41 -- # break 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.170 10:51:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@41 -- # break 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.432 10:51:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@65 -- # true 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.432 10:51:39 -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.432 10:51:39 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.694 10:51:39 -- event/event.sh@35 -- # 
sleep 3 00:05:42.954 [2024-05-15 10:51:39.348328] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.954 [2024-05-15 10:51:39.410945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.954 [2024-05-15 10:51:39.410947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.954 [2024-05-15 10:51:39.442704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.954 [2024-05-15 10:51:39.442739] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.267 10:51:42 -- event/event.sh@23 -- # for i in {0..2} 00:05:46.267 10:51:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:46.267 spdk_app_start Round 1 00:05:46.267 10:51:42 -- event/event.sh@25 -- # waitforlisten 138517 /var/tmp/spdk-nbd.sock 00:05:46.267 10:51:42 -- common/autotest_common.sh@827 -- # '[' -z 138517 ']' 00:05:46.267 10:51:42 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.267 10:51:42 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.268 10:51:42 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.268 10:51:42 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.268 10:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.268 10:51:42 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.268 10:51:42 -- common/autotest_common.sh@860 -- # return 0 00:05:46.268 10:51:42 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.268 Malloc0 00:05:46.268 10:51:42 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.268 Malloc1 00:05:46.268 10:51:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@12 -- # local i 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.268 /dev/nbd0 00:05:46.268 10:51:42 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.268 10:51:42 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:46.268 10:51:42 -- common/autotest_common.sh@865 -- # local i 00:05:46.268 10:51:42 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:46.268 10:51:42 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:46.268 10:51:42 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:46.268 10:51:42 -- common/autotest_common.sh@869 -- # break 00:05:46.268 10:51:42 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:46.268 10:51:42 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:46.268 10:51:42 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.268 1+0 records in 00:05:46.268 1+0 records out 00:05:46.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000120388 s, 34.0 MB/s 00:05:46.268 10:51:42 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.268 10:51:42 -- common/autotest_common.sh@882 -- # size=4096 00:05:46.268 10:51:42 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.268 10:51:42 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:46.268 10:51:42 -- common/autotest_common.sh@885 -- # return 0 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.268 10:51:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.528 /dev/nbd1 00:05:46.528 10:51:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.528 10:51:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.528 10:51:43 -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:46.528 10:51:43 -- common/autotest_common.sh@865 -- # local i 00:05:46.528 10:51:43 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:46.528 10:51:43 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:46.528 10:51:43 -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:46.528 10:51:43 -- common/autotest_common.sh@869 -- # break 00:05:46.528 10:51:43 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:46.528 10:51:43 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:46.528 10:51:43 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.528 1+0 records in 00:05:46.528 1+0 records out 00:05:46.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328449 s, 12.5 MB/s 00:05:46.528 10:51:43 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.528 10:51:43 -- common/autotest_common.sh@882 -- # size=4096 00:05:46.528 10:51:43 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.528 10:51:43 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:46.528 10:51:43 -- common/autotest_common.sh@885 -- # return 0 00:05:46.528 10:51:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.528 10:51:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.528 10:51:43 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.528 10:51:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.528 10:51:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.789 { 00:05:46.789 "nbd_device": "/dev/nbd0", 00:05:46.789 "bdev_name": "Malloc0" 00:05:46.789 }, 00:05:46.789 { 00:05:46.789 "nbd_device": "/dev/nbd1", 00:05:46.789 "bdev_name": "Malloc1" 00:05:46.789 } 00:05:46.789 ]' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.789 { 00:05:46.789 "nbd_device": "/dev/nbd0", 00:05:46.789 "bdev_name": "Malloc0" 00:05:46.789 }, 00:05:46.789 { 00:05:46.789 "nbd_device": "/dev/nbd1", 00:05:46.789 "bdev_name": "Malloc1" 00:05:46.789 } 00:05:46.789 ]' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.789 /dev/nbd1' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.789 /dev/nbd1' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.789 256+0 records in 00:05:46.789 256+0 records out 00:05:46.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119874 s, 87.5 MB/s 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.789 256+0 records in 00:05:46.789 256+0 records out 00:05:46.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166585 s, 62.9 MB/s 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.789 256+0 records in 00:05:46.789 256+0 records out 00:05:46.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172819 s, 60.7 MB/s 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@51 -- # local i 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.789 10:51:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@41 -- # break 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@41 -- # break 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.049 10:51:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.309 10:51:43 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@65 -- # true 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.309 10:51:43 -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.309 10:51:43 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.571 10:51:44 -- event/event.sh@35 -- # sleep 3 00:05:47.571 [2024-05-15 10:51:44.169551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.830 [2024-05-15 10:51:44.231809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.830 [2024-05-15 10:51:44.231811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.830 [2024-05-15 10:51:44.264239] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.830 [2024-05-15 10:51:44.264273] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.139 10:51:47 -- event/event.sh@23 -- # for i in {0..2} 00:05:51.139 10:51:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:51.139 spdk_app_start Round 2 00:05:51.139 10:51:47 -- event/event.sh@25 -- # waitforlisten 138517 /var/tmp/spdk-nbd.sock 00:05:51.139 10:51:47 -- common/autotest_common.sh@827 -- # '[' -z 138517 ']' 00:05:51.139 10:51:47 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.139 10:51:47 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:51.139 10:51:47 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
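Every waitfornbd call traced in these rounds reduces to the same two checks: the kernel must list the device in /proc/partitions, and a single 4 KiB direct read must succeed, proving the device actually serves I/O rather than merely existing. A condensed sketch, with the temp-file path shortened and the retry back-off assumed (the 20-iteration limit matches the loop counters visible in the trace):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/tmp/nbdtest        # the test writes under spdk/test/event; path shortened here
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                 # back-off between retries is an assumption, not shown in the trace
        done
        # one direct 4 KiB read from the device proves it is usable
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }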
00:05:51.139 10:51:47 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:51.139 10:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.139 10:51:47 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.139 10:51:47 -- common/autotest_common.sh@860 -- # return 0 00:05:51.139 10:51:47 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.139 Malloc0 00:05:51.139 10:51:47 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.139 Malloc1 00:05:51.139 10:51:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@12 -- # local i 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.139 /dev/nbd0 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.139 10:51:47 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:51.139 10:51:47 -- common/autotest_common.sh@865 -- # local i 00:05:51.139 10:51:47 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:51.139 10:51:47 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:51.139 10:51:47 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:51.139 10:51:47 -- common/autotest_common.sh@869 -- # break 00:05:51.139 10:51:47 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:51.139 10:51:47 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:51.139 10:51:47 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.139 1+0 records in 00:05:51.139 1+0 records out 00:05:51.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216068 s, 19.0 MB/s 00:05:51.139 10:51:47 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.139 10:51:47 -- common/autotest_common.sh@882 -- # size=4096 00:05:51.139 10:51:47 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.139 10:51:47 -- common/autotest_common.sh@884 -- # 
'[' 4096 '!=' 0 ']' 00:05:51.139 10:51:47 -- common/autotest_common.sh@885 -- # return 0 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.139 10:51:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.400 /dev/nbd1 00:05:51.400 10:51:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.400 10:51:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.400 10:51:47 -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:51.400 10:51:47 -- common/autotest_common.sh@865 -- # local i 00:05:51.400 10:51:47 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:51.400 10:51:47 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:51.400 10:51:47 -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:51.400 10:51:47 -- common/autotest_common.sh@869 -- # break 00:05:51.400 10:51:47 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:51.400 10:51:47 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:51.400 10:51:47 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.400 1+0 records in 00:05:51.400 1+0 records out 00:05:51.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281505 s, 14.6 MB/s 00:05:51.400 10:51:47 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.400 10:51:47 -- common/autotest_common.sh@882 -- # size=4096 00:05:51.400 10:51:47 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.400 10:51:47 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:51.400 10:51:47 -- common/autotest_common.sh@885 -- # return 0 00:05:51.400 10:51:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.400 10:51:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.400 10:51:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.400 10:51:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.400 10:51:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.400 10:51:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.400 { 00:05:51.400 "nbd_device": "/dev/nbd0", 00:05:51.400 "bdev_name": "Malloc0" 00:05:51.400 }, 00:05:51.400 { 00:05:51.400 "nbd_device": "/dev/nbd1", 00:05:51.400 "bdev_name": "Malloc1" 00:05:51.400 } 00:05:51.400 ]' 00:05:51.400 10:51:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.400 { 00:05:51.400 "nbd_device": "/dev/nbd0", 00:05:51.400 "bdev_name": "Malloc0" 00:05:51.400 }, 00:05:51.400 { 00:05:51.400 "nbd_device": "/dev/nbd1", 00:05:51.400 "bdev_name": "Malloc1" 00:05:51.400 } 00:05:51.400 ]' 00:05:51.400 10:51:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.661 /dev/nbd1' 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.661 /dev/nbd1' 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.661 10:51:48 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.661 256+0 records in 00:05:51.661 256+0 records out 00:05:51.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118436 s, 88.5 MB/s 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.661 256+0 records in 00:05:51.661 256+0 records out 00:05:51.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197328 s, 53.1 MB/s 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.661 256+0 records in 00:05:51.661 256+0 records out 00:05:51.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162631 s, 64.5 MB/s 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.661 10:51:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@51 -- # local i 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.662 10:51:48 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.922 10:51:48 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.922 10:51:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.922 10:51:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.922 10:51:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.922 10:51:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.922 10:51:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.922 10:51:48 -- bdev/nbd_common.sh@41 -- # break 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@41 -- # break 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.923 10:51:48 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@65 -- # true 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.184 10:51:48 -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.184 10:51:48 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.444 10:51:48 -- event/event.sh@35 -- # sleep 3 00:05:52.444 [2024-05-15 10:51:49.027865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.444 [2024-05-15 10:51:49.090656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.444 [2024-05-15 10:51:49.090744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.706 [2024-05-15 10:51:49.122297] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.706 [2024-05-15 10:51:49.122332] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
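Stripped of the per-command tracing, each app_repeat round performs the same write/verify cycle against two malloc bdevs exported over NBD, then tears everything down so the next round starts clean. A condensed sketch using the RPCs visible above (rpc.py and temp-file paths shortened, error handling omitted):

    rpc() { scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096                  # -> Malloc0
    rpc bdev_malloc_create 64 4096                  # -> Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1
    # write 1 MiB of random data to both devices and compare what comes back
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$dev"        # any mismatch fails the round
    done
    rm /tmp/nbdrandtest
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1
    rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true   # expect 0 devices left
    rpc spdk_kill_instance SIGTERM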
00:05:55.259 10:51:51 -- event/event.sh@38 -- # waitforlisten 138517 /var/tmp/spdk-nbd.sock 00:05:55.259 10:51:51 -- common/autotest_common.sh@827 -- # '[' -z 138517 ']' 00:05:55.259 10:51:51 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.259 10:51:51 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.259 10:51:51 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.259 10:51:51 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.259 10:51:51 -- common/autotest_common.sh@10 -- # set +x 00:05:55.520 10:51:52 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.520 10:51:52 -- common/autotest_common.sh@860 -- # return 0 00:05:55.520 10:51:52 -- event/event.sh@39 -- # killprocess 138517 00:05:55.520 10:51:52 -- common/autotest_common.sh@946 -- # '[' -z 138517 ']' 00:05:55.520 10:51:52 -- common/autotest_common.sh@950 -- # kill -0 138517 00:05:55.520 10:51:52 -- common/autotest_common.sh@951 -- # uname 00:05:55.520 10:51:52 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.520 10:51:52 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 138517 00:05:55.520 10:51:52 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.520 10:51:52 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.520 10:51:52 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 138517' 00:05:55.520 killing process with pid 138517 00:05:55.520 10:51:52 -- common/autotest_common.sh@965 -- # kill 138517 00:05:55.520 10:51:52 -- common/autotest_common.sh@970 -- # wait 138517 00:05:55.781 spdk_app_start is called in Round 0. 00:05:55.781 Shutdown signal received, stop current app iteration 00:05:55.781 Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 reinitialization... 00:05:55.781 spdk_app_start is called in Round 1. 00:05:55.781 Shutdown signal received, stop current app iteration 00:05:55.781 Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 reinitialization... 00:05:55.781 spdk_app_start is called in Round 2. 00:05:55.781 Shutdown signal received, stop current app iteration 00:05:55.781 Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 reinitialization... 00:05:55.781 spdk_app_start is called in Round 3. 
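The killprocess helper traced above (here for pid 138517) guards against stale or recycled pids before signalling: the pid must still exist, and on Linux its command name is checked so the test never signals an unrelated process. A simplified stand-in, condensed from this trace; the real helper covers more cases, and wait succeeds here only because the pid was started from the same shell:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in the run above
            # the traced run takes the non-sudo branch; the sudo case is omitted in this sketch
            [ "$name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }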
00:05:55.781 Shutdown signal received, stop current app iteration 00:05:55.781 10:51:52 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.781 10:51:52 -- event/event.sh@42 -- # return 0 00:05:55.781 00:05:55.781 real 0m15.582s 00:05:55.781 user 0m33.609s 00:05:55.781 sys 0m2.072s 00:05:55.781 10:51:52 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.781 10:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:55.781 ************************************ 00:05:55.781 END TEST app_repeat 00:05:55.781 ************************************ 00:05:55.781 10:51:52 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.781 10:51:52 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.781 10:51:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.781 10:51:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.781 10:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:55.781 ************************************ 00:05:55.781 START TEST cpu_locks 00:05:55.781 ************************************ 00:05:55.781 10:51:52 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.781 * Looking for test storage... 00:05:55.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.781 10:51:52 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:55.781 10:51:52 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:55.781 10:51:52 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:55.781 10:51:52 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:55.781 10:51:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.781 10:51:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.781 10:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:56.042 ************************************ 00:05:56.042 START TEST default_locks 00:05:56.042 ************************************ 00:05:56.042 10:51:52 -- common/autotest_common.sh@1121 -- # default_locks 00:05:56.042 10:51:52 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=142056 00:05:56.042 10:51:52 -- event/cpu_locks.sh@47 -- # waitforlisten 142056 00:05:56.042 10:51:52 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.042 10:51:52 -- common/autotest_common.sh@827 -- # '[' -z 142056 ']' 00:05:56.042 10:51:52 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.042 10:51:52 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.042 10:51:52 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.042 10:51:52 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.042 10:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:56.042 [2024-05-15 10:51:52.495588] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:05:56.042 [2024-05-15 10:51:52.495656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142056 ] 00:05:56.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.042 [2024-05-15 10:51:52.559266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.042 [2024-05-15 10:51:52.632963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.614 10:51:53 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.614 10:51:53 -- common/autotest_common.sh@860 -- # return 0 00:05:56.614 10:51:53 -- event/cpu_locks.sh@49 -- # locks_exist 142056 00:05:56.614 10:51:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.614 10:51:53 -- event/cpu_locks.sh@22 -- # lslocks -p 142056 00:05:57.188 lslocks: write error 00:05:57.188 10:51:53 -- event/cpu_locks.sh@50 -- # killprocess 142056 00:05:57.188 10:51:53 -- common/autotest_common.sh@946 -- # '[' -z 142056 ']' 00:05:57.188 10:51:53 -- common/autotest_common.sh@950 -- # kill -0 142056 00:05:57.188 10:51:53 -- common/autotest_common.sh@951 -- # uname 00:05:57.188 10:51:53 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:57.188 10:51:53 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142056 00:05:57.450 10:51:53 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:57.450 10:51:53 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:57.450 10:51:53 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142056' 00:05:57.450 killing process with pid 142056 00:05:57.450 10:51:53 -- common/autotest_common.sh@965 -- # kill 142056 00:05:57.450 10:51:53 -- common/autotest_common.sh@970 -- # wait 142056 00:05:57.450 10:51:54 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 142056 00:05:57.450 10:51:54 -- common/autotest_common.sh@648 -- # local es=0 00:05:57.450 10:51:54 -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 142056 00:05:57.450 10:51:54 -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:57.450 10:51:54 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.450 10:51:54 -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:57.450 10:51:54 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.450 10:51:54 -- common/autotest_common.sh@651 -- # waitforlisten 142056 00:05:57.450 10:51:54 -- common/autotest_common.sh@827 -- # '[' -z 142056 ']' 00:05:57.450 10:51:54 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.450 10:51:54 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.450 10:51:54 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.450 10:51:54 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.450 10:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (142056) - No such process 00:05:57.450 ERROR: process (pid: 142056) is no longer running 00:05:57.450 10:51:54 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.450 10:51:54 -- common/autotest_common.sh@860 -- # return 1 00:05:57.450 10:51:54 -- common/autotest_common.sh@651 -- # es=1 00:05:57.450 10:51:54 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.450 10:51:54 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.450 10:51:54 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.450 10:51:54 -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.450 10:51:54 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.450 10:51:54 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.450 10:51:54 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.450 00:05:57.451 real 0m1.625s 00:05:57.451 user 0m1.718s 00:05:57.451 sys 0m0.554s 00:05:57.451 10:51:54 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.451 10:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.451 ************************************ 00:05:57.451 END TEST default_locks 00:05:57.451 ************************************ 00:05:57.451 10:51:54 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.451 10:51:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.451 10:51:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.451 10:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.712 ************************************ 00:05:57.712 START TEST default_locks_via_rpc 00:05:57.712 ************************************ 00:05:57.712 10:51:54 -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:57.712 10:51:54 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=142432 00:05:57.712 10:51:54 -- event/cpu_locks.sh@63 -- # waitforlisten 142432 00:05:57.712 10:51:54 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.712 10:51:54 -- common/autotest_common.sh@827 -- # '[' -z 142432 ']' 00:05:57.712 10:51:54 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.712 10:51:54 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.712 10:51:54 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.712 10:51:54 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.712 10:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.712 [2024-05-15 10:51:54.188142] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:05:57.712 [2024-05-15 10:51:54.188192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142432 ] 00:05:57.712 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.712 [2024-05-15 10:51:54.247942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.712 [2024-05-15 10:51:54.315634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.288 10:51:54 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.288 10:51:54 -- common/autotest_common.sh@860 -- # return 0 00:05:58.288 10:51:54 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.288 10:51:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.288 10:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.550 10:51:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.550 10:51:54 -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.550 10:51:54 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.550 10:51:54 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.550 10:51:54 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.550 10:51:54 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.550 10:51:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.550 10:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.550 10:51:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.550 10:51:54 -- event/cpu_locks.sh@71 -- # locks_exist 142432 00:05:58.550 10:51:54 -- event/cpu_locks.sh@22 -- # lslocks -p 142432 00:05:58.550 10:51:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.550 10:51:55 -- event/cpu_locks.sh@73 -- # killprocess 142432 00:05:58.550 10:51:55 -- common/autotest_common.sh@946 -- # '[' -z 142432 ']' 00:05:58.550 10:51:55 -- common/autotest_common.sh@950 -- # kill -0 142432 00:05:58.550 10:51:55 -- common/autotest_common.sh@951 -- # uname 00:05:58.550 10:51:55 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.550 10:51:55 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142432 00:05:58.550 10:51:55 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:58.550 10:51:55 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.550 10:51:55 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142432' 00:05:58.550 killing process with pid 142432 00:05:58.550 10:51:55 -- common/autotest_common.sh@965 -- # kill 142432 00:05:58.550 10:51:55 -- common/autotest_common.sh@970 -- # wait 142432 00:05:58.813 00:05:58.813 real 0m1.243s 00:05:58.813 user 0m1.336s 00:05:58.813 sys 0m0.382s 00:05:58.813 10:51:55 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.813 10:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.813 ************************************ 00:05:58.813 END TEST default_locks_via_rpc 00:05:58.813 ************************************ 00:05:58.813 10:51:55 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:58.813 10:51:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.813 10:51:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.813 10:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.813 ************************************ 00:05:58.813 START TEST non_locking_app_on_locked_coremask 00:05:58.813 
************************************ 00:05:58.813 10:51:55 -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:58.813 10:51:55 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=142793 00:05:58.813 10:51:55 -- event/cpu_locks.sh@81 -- # waitforlisten 142793 /var/tmp/spdk.sock 00:05:58.813 10:51:55 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.813 10:51:55 -- common/autotest_common.sh@827 -- # '[' -z 142793 ']' 00:05:58.813 10:51:55 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.813 10:51:55 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.813 10:51:55 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.813 10:51:55 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.813 10:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:59.075 [2024-05-15 10:51:55.512941] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:59.075 [2024-05-15 10:51:55.512992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142793 ] 00:05:59.075 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.075 [2024-05-15 10:51:55.573691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.075 [2024-05-15 10:51:55.643278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.648 10:51:56 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.648 10:51:56 -- common/autotest_common.sh@860 -- # return 0 00:05:59.648 10:51:56 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:59.648 10:51:56 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=142807 00:05:59.648 10:51:56 -- event/cpu_locks.sh@85 -- # waitforlisten 142807 /var/tmp/spdk2.sock 00:05:59.648 10:51:56 -- common/autotest_common.sh@827 -- # '[' -z 142807 ']' 00:05:59.648 10:51:56 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.648 10:51:56 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.648 10:51:56 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.648 10:51:56 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.648 10:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:59.648 [2024-05-15 10:51:56.293858] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:05:59.649 [2024-05-15 10:51:56.293908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142807 ] 00:05:59.909 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.909 [2024-05-15 10:51:56.382694] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.909 [2024-05-15 10:51:56.382720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.909 [2024-05-15 10:51:56.511759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.483 10:51:57 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.483 10:51:57 -- common/autotest_common.sh@860 -- # return 0 00:06:00.483 10:51:57 -- event/cpu_locks.sh@87 -- # locks_exist 142793 00:06:00.483 10:51:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.483 10:51:57 -- event/cpu_locks.sh@22 -- # lslocks -p 142793 00:06:01.055 lslocks: write error 00:06:01.055 10:51:57 -- event/cpu_locks.sh@89 -- # killprocess 142793 00:06:01.055 10:51:57 -- common/autotest_common.sh@946 -- # '[' -z 142793 ']' 00:06:01.055 10:51:57 -- common/autotest_common.sh@950 -- # kill -0 142793 00:06:01.055 10:51:57 -- common/autotest_common.sh@951 -- # uname 00:06:01.055 10:51:57 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.055 10:51:57 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142793 00:06:01.315 10:51:57 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.315 10:51:57 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.315 10:51:57 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142793' 00:06:01.315 killing process with pid 142793 00:06:01.315 10:51:57 -- common/autotest_common.sh@965 -- # kill 142793 00:06:01.315 10:51:57 -- common/autotest_common.sh@970 -- # wait 142793 00:06:01.577 10:51:58 -- event/cpu_locks.sh@90 -- # killprocess 142807 00:06:01.577 10:51:58 -- common/autotest_common.sh@946 -- # '[' -z 142807 ']' 00:06:01.577 10:51:58 -- common/autotest_common.sh@950 -- # kill -0 142807 00:06:01.577 10:51:58 -- common/autotest_common.sh@951 -- # uname 00:06:01.577 10:51:58 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.577 10:51:58 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142807 00:06:01.577 10:51:58 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.577 10:51:58 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.577 10:51:58 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142807' 00:06:01.577 killing process with pid 142807 00:06:01.577 10:51:58 -- common/autotest_common.sh@965 -- # kill 142807 00:06:01.577 10:51:58 -- common/autotest_common.sh@970 -- # wait 142807 00:06:01.840 00:06:01.840 real 0m2.952s 00:06:01.840 user 0m3.211s 00:06:01.840 sys 0m0.864s 00:06:01.840 10:51:58 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.840 10:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.840 ************************************ 00:06:01.840 END TEST non_locking_app_on_locked_coremask 00:06:01.840 ************************************ 00:06:01.840 10:51:58 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.840 10:51:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.840 10:51:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.840 10:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.840 ************************************ 00:06:01.840 START TEST locking_app_on_unlocked_coremask 00:06:01.840 ************************************ 00:06:01.840 10:51:58 -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:01.840 10:51:58 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=143326 00:06:01.840 10:51:58 -- event/cpu_locks.sh@99 -- # 
waitforlisten 143326 /var/tmp/spdk.sock 00:06:01.840 10:51:58 -- common/autotest_common.sh@827 -- # '[' -z 143326 ']' 00:06:01.840 10:51:58 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.840 10:51:58 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.840 10:51:58 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.840 10:51:58 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.840 10:51:58 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.840 10:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:02.102 [2024-05-15 10:51:58.524098] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:02.102 [2024-05-15 10:51:58.524149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143326 ] 00:06:02.102 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.102 [2024-05-15 10:51:58.584579] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.102 [2024-05-15 10:51:58.584609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.102 [2024-05-15 10:51:58.652119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.675 10:51:59 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.675 10:51:59 -- common/autotest_common.sh@860 -- # return 0 00:06:02.676 10:51:59 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.676 10:51:59 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=143512 00:06:02.676 10:51:59 -- event/cpu_locks.sh@103 -- # waitforlisten 143512 /var/tmp/spdk2.sock 00:06:02.676 10:51:59 -- common/autotest_common.sh@827 -- # '[' -z 143512 ']' 00:06:02.676 10:51:59 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.676 10:51:59 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.676 10:51:59 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.676 10:51:59 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.676 10:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:02.676 [2024-05-15 10:51:59.314352] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:02.676 [2024-05-15 10:51:59.314401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143512 ] 00:06:02.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.936 [2024-05-15 10:51:59.400629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.936 [2024-05-15 10:51:59.529652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.507 10:52:00 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.507 10:52:00 -- common/autotest_common.sh@860 -- # return 0 00:06:03.507 10:52:00 -- event/cpu_locks.sh@105 -- # locks_exist 143512 00:06:03.507 10:52:00 -- event/cpu_locks.sh@22 -- # lslocks -p 143512 00:06:03.507 10:52:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.079 lslocks: write error 00:06:04.079 10:52:00 -- event/cpu_locks.sh@107 -- # killprocess 143326 00:06:04.079 10:52:00 -- common/autotest_common.sh@946 -- # '[' -z 143326 ']' 00:06:04.079 10:52:00 -- common/autotest_common.sh@950 -- # kill -0 143326 00:06:04.079 10:52:00 -- common/autotest_common.sh@951 -- # uname 00:06:04.079 10:52:00 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.079 10:52:00 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143326 00:06:04.079 10:52:00 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.079 10:52:00 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.079 10:52:00 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143326' 00:06:04.079 killing process with pid 143326 00:06:04.079 10:52:00 -- common/autotest_common.sh@965 -- # kill 143326 00:06:04.079 10:52:00 -- common/autotest_common.sh@970 -- # wait 143326 00:06:04.651 10:52:01 -- event/cpu_locks.sh@108 -- # killprocess 143512 00:06:04.651 10:52:01 -- common/autotest_common.sh@946 -- # '[' -z 143512 ']' 00:06:04.651 10:52:01 -- common/autotest_common.sh@950 -- # kill -0 143512 00:06:04.651 10:52:01 -- common/autotest_common.sh@951 -- # uname 00:06:04.651 10:52:01 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.651 10:52:01 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143512 00:06:04.651 10:52:01 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.651 10:52:01 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.651 10:52:01 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143512' 00:06:04.651 killing process with pid 143512 00:06:04.651 10:52:01 -- common/autotest_common.sh@965 -- # kill 143512 00:06:04.651 10:52:01 -- common/autotest_common.sh@970 -- # wait 143512 00:06:04.912 00:06:04.912 real 0m2.925s 00:06:04.912 user 0m3.164s 00:06:04.912 sys 0m0.855s 00:06:04.912 10:52:01 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.912 10:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:04.912 ************************************ 00:06:04.912 END TEST locking_app_on_unlocked_coremask 00:06:04.912 ************************************ 00:06:04.912 10:52:01 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.912 10:52:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.912 10:52:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.912 10:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:04.912 
************************************ 00:06:04.912 START TEST locking_app_on_locked_coremask 00:06:04.912 ************************************ 00:06:04.912 10:52:01 -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:04.912 10:52:01 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=143891 00:06:04.912 10:52:01 -- event/cpu_locks.sh@116 -- # waitforlisten 143891 /var/tmp/spdk.sock 00:06:04.912 10:52:01 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.912 10:52:01 -- common/autotest_common.sh@827 -- # '[' -z 143891 ']' 00:06:04.912 10:52:01 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.912 10:52:01 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.912 10:52:01 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.912 10:52:01 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.912 10:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:04.912 [2024-05-15 10:52:01.526436] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:04.912 [2024-05-15 10:52:01.526484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143891 ] 00:06:04.912 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.172 [2024-05-15 10:52:01.586502] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.172 [2024-05-15 10:52:01.653753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.745 10:52:02 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.745 10:52:02 -- common/autotest_common.sh@860 -- # return 0 00:06:05.745 10:52:02 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.745 10:52:02 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=144222 00:06:05.745 10:52:02 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 144222 /var/tmp/spdk2.sock 00:06:05.745 10:52:02 -- common/autotest_common.sh@648 -- # local es=0 00:06:05.745 10:52:02 -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 144222 /var/tmp/spdk2.sock 00:06:05.745 10:52:02 -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:05.745 10:52:02 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.745 10:52:02 -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:05.745 10:52:02 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.745 10:52:02 -- common/autotest_common.sh@651 -- # waitforlisten 144222 /var/tmp/spdk2.sock 00:06:05.745 10:52:02 -- common/autotest_common.sh@827 -- # '[' -z 144222 ']' 00:06:05.745 10:52:02 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.745 10:52:02 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.745 10:52:02 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:05.745 10:52:02 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.745 10:52:02 -- common/autotest_common.sh@10 -- # set +x 00:06:05.745 [2024-05-15 10:52:02.318983] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:05.745 [2024-05-15 10:52:02.319030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144222 ] 00:06:05.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.005 [2024-05-15 10:52:02.407949] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 143891 has claimed it. 00:06:06.005 [2024-05-15 10:52:02.407987] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (144222) - No such process 00:06:06.575 ERROR: process (pid: 144222) is no longer running 00:06:06.575 10:52:02 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.575 10:52:02 -- common/autotest_common.sh@860 -- # return 1 00:06:06.575 10:52:02 -- common/autotest_common.sh@651 -- # es=1 00:06:06.575 10:52:02 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.575 10:52:02 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.575 10:52:02 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.575 10:52:02 -- event/cpu_locks.sh@122 -- # locks_exist 143891 00:06:06.575 10:52:02 -- event/cpu_locks.sh@22 -- # lslocks -p 143891 00:06:06.575 10:52:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.835 lslocks: write error 00:06:06.835 10:52:03 -- event/cpu_locks.sh@124 -- # killprocess 143891 00:06:06.835 10:52:03 -- common/autotest_common.sh@946 -- # '[' -z 143891 ']' 00:06:06.835 10:52:03 -- common/autotest_common.sh@950 -- # kill -0 143891 00:06:06.835 10:52:03 -- common/autotest_common.sh@951 -- # uname 00:06:06.835 10:52:03 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.835 10:52:03 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143891 00:06:06.835 10:52:03 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.835 10:52:03 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.835 10:52:03 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143891' 00:06:06.835 killing process with pid 143891 00:06:06.835 10:52:03 -- common/autotest_common.sh@965 -- # kill 143891 00:06:06.835 10:52:03 -- common/autotest_common.sh@970 -- # wait 143891 00:06:07.096 00:06:07.096 real 0m2.185s 00:06:07.096 user 0m2.404s 00:06:07.096 sys 0m0.610s 00:06:07.096 10:52:03 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.096 10:52:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.096 ************************************ 00:06:07.096 END TEST locking_app_on_locked_coremask 00:06:07.096 ************************************ 00:06:07.096 10:52:03 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.096 10:52:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.096 10:52:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.096 10:52:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.096 ************************************ 00:06:07.096 START TEST locking_overlapped_coremask 00:06:07.096 
************************************ 00:06:07.096 10:52:03 -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:07.096 10:52:03 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=144514 00:06:07.096 10:52:03 -- event/cpu_locks.sh@133 -- # waitforlisten 144514 /var/tmp/spdk.sock 00:06:07.096 10:52:03 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.096 10:52:03 -- common/autotest_common.sh@827 -- # '[' -z 144514 ']' 00:06:07.096 10:52:03 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.096 10:52:03 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.096 10:52:03 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.096 10:52:03 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.096 10:52:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.358 [2024-05-15 10:52:03.775196] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:07.358 [2024-05-15 10:52:03.775245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144514 ] 00:06:07.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.358 [2024-05-15 10:52:03.833935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.358 [2024-05-15 10:52:03.899429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.358 [2024-05-15 10:52:03.899543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.358 [2024-05-15 10:52:03.899550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.930 10:52:04 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.930 10:52:04 -- common/autotest_common.sh@860 -- # return 0 00:06:07.930 10:52:04 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=144600 00:06:07.930 10:52:04 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 144600 /var/tmp/spdk2.sock 00:06:07.930 10:52:04 -- common/autotest_common.sh@648 -- # local es=0 00:06:07.930 10:52:04 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:07.930 10:52:04 -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 144600 /var/tmp/spdk2.sock 00:06:07.930 10:52:04 -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:07.930 10:52:04 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.930 10:52:04 -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:07.930 10:52:04 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.930 10:52:04 -- common/autotest_common.sh@651 -- # waitforlisten 144600 /var/tmp/spdk2.sock 00:06:07.930 10:52:04 -- common/autotest_common.sh@827 -- # '[' -z 144600 ']' 00:06:07.930 10:52:04 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.930 10:52:04 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.930 10:52:04 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:07.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.930 10:52:04 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.930 10:52:04 -- common/autotest_common.sh@10 -- # set +x 00:06:08.191 [2024-05-15 10:52:04.598823] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:08.191 [2024-05-15 10:52:04.598880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144600 ] 00:06:08.191 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.191 [2024-05-15 10:52:04.668884] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 144514 has claimed it. 00:06:08.191 [2024-05-15 10:52:04.668914] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (144600) - No such process 00:06:08.764 ERROR: process (pid: 144600) is no longer running 00:06:08.764 10:52:05 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.764 10:52:05 -- common/autotest_common.sh@860 -- # return 1 00:06:08.764 10:52:05 -- common/autotest_common.sh@651 -- # es=1 00:06:08.764 10:52:05 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.764 10:52:05 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.764 10:52:05 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.764 10:52:05 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:08.764 10:52:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.764 10:52:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.764 10:52:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.764 10:52:05 -- event/cpu_locks.sh@141 -- # killprocess 144514 00:06:08.764 10:52:05 -- common/autotest_common.sh@946 -- # '[' -z 144514 ']' 00:06:08.764 10:52:05 -- common/autotest_common.sh@950 -- # kill -0 144514 00:06:08.764 10:52:05 -- common/autotest_common.sh@951 -- # uname 00:06:08.764 10:52:05 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.764 10:52:05 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144514 00:06:08.764 10:52:05 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.764 10:52:05 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.764 10:52:05 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144514' 00:06:08.764 killing process with pid 144514 00:06:08.764 10:52:05 -- common/autotest_common.sh@965 -- # kill 144514 00:06:08.764 10:52:05 -- common/autotest_common.sh@970 -- # wait 144514 00:06:09.025 00:06:09.025 real 0m1.743s 00:06:09.025 user 0m4.955s 00:06:09.025 sys 0m0.364s 00:06:09.025 10:52:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.025 10:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.025 ************************************ 00:06:09.025 END TEST locking_overlapped_coremask 00:06:09.025 ************************************ 00:06:09.025 10:52:05 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.025 10:52:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.025 10:52:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.025 10:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.025 ************************************ 00:06:09.025 START TEST locking_overlapped_coremask_via_rpc 00:06:09.025 ************************************ 00:06:09.025 10:52:05 -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:09.025 10:52:05 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=144958 00:06:09.025 10:52:05 -- event/cpu_locks.sh@149 -- # waitforlisten 144958 /var/tmp/spdk.sock 00:06:09.025 10:52:05 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.025 10:52:05 -- common/autotest_common.sh@827 -- # '[' -z 144958 ']' 00:06:09.025 10:52:05 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.025 10:52:05 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.025 10:52:05 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.025 10:52:05 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.025 10:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.025 [2024-05-15 10:52:05.597862] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:09.025 [2024-05-15 10:52:05.597907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144958 ] 00:06:09.025 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.025 [2024-05-15 10:52:05.656619] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:09.025 [2024-05-15 10:52:05.656645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.286 [2024-05-15 10:52:05.722565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.286 [2024-05-15 10:52:05.722709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.286 [2024-05-15 10:52:05.722806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.286 10:52:05 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.286 10:52:05 -- common/autotest_common.sh@860 -- # return 0 00:06:09.286 10:52:05 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=144963 00:06:09.286 10:52:05 -- event/cpu_locks.sh@153 -- # waitforlisten 144963 /var/tmp/spdk2.sock 00:06:09.286 10:52:05 -- common/autotest_common.sh@827 -- # '[' -z 144963 ']' 00:06:09.286 10:52:05 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:09.286 10:52:05 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.286 10:52:05 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.286 10:52:05 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:09.286 10:52:05 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.286 10:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.547 [2024-05-15 10:52:05.946067] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:09.547 [2024-05-15 10:52:05.946113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144963 ] 00:06:09.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.547 [2024-05-15 10:52:06.016292] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:09.547 [2024-05-15 10:52:06.016318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.547 [2024-05-15 10:52:06.125610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.547 [2024-05-15 10:52:06.125767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.547 [2024-05-15 10:52:06.125769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:10.120 10:52:06 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.120 10:52:06 -- common/autotest_common.sh@860 -- # return 0 00:06:10.120 10:52:06 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.120 10:52:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.120 10:52:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.120 10:52:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.120 10:52:06 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.120 10:52:06 -- common/autotest_common.sh@648 -- # local es=0 00:06:10.120 10:52:06 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.120 10:52:06 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:10.120 10:52:06 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.120 10:52:06 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:10.120 10:52:06 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.120 10:52:06 -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.120 10:52:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.120 10:52:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.120 [2024-05-15 10:52:06.724607] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 144958 has claimed it. 
00:06:10.120 request: 00:06:10.120 { 00:06:10.120 "method": "framework_enable_cpumask_locks", 00:06:10.120 "req_id": 1 00:06:10.120 } 00:06:10.120 Got JSON-RPC error response 00:06:10.120 response: 00:06:10.120 { 00:06:10.120 "code": -32603, 00:06:10.120 "message": "Failed to claim CPU core: 2" 00:06:10.120 } 00:06:10.120 10:52:06 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:10.120 10:52:06 -- common/autotest_common.sh@651 -- # es=1 00:06:10.120 10:52:06 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.120 10:52:06 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.120 10:52:06 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.120 10:52:06 -- event/cpu_locks.sh@158 -- # waitforlisten 144958 /var/tmp/spdk.sock 00:06:10.120 10:52:06 -- common/autotest_common.sh@827 -- # '[' -z 144958 ']' 00:06:10.120 10:52:06 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.120 10:52:06 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.120 10:52:06 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.120 10:52:06 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.120 10:52:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 10:52:06 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.382 10:52:06 -- common/autotest_common.sh@860 -- # return 0 00:06:10.382 10:52:06 -- event/cpu_locks.sh@159 -- # waitforlisten 144963 /var/tmp/spdk2.sock 00:06:10.382 10:52:06 -- common/autotest_common.sh@827 -- # '[' -z 144963 ']' 00:06:10.382 10:52:06 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.382 10:52:06 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.382 10:52:06 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
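The JSON-RPC exchange above is the core of this test: the second target on /var/tmp/spdk2.sock asks to re-enable per-core lock files while the first target (pid 144958) still owns the lock for core 2, so the request fails with -32603 "Failed to claim CPU core: 2". A minimal sketch of issuing the same call by hand, assuming SPDK's scripts/rpc.py client is available (the rpc_cmd helper in the trace appears to forward to it); only the socket path and the method name are taken from the trace, the rest is illustrative:

    # Ask the target on spdk2.sock to re-create its per-core lock files.
    # Expected to fail while another target already holds /var/tmp/spdk_cpu_lock_002.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks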
00:06:10.382 10:52:06 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.382 10:52:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.644 10:52:07 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.644 10:52:07 -- common/autotest_common.sh@860 -- # return 0 00:06:10.644 10:52:07 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.644 10:52:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.644 10:52:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.645 10:52:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.645 00:06:10.645 real 0m1.522s 00:06:10.645 user 0m0.739s 00:06:10.645 sys 0m0.110s 00:06:10.645 10:52:07 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.645 10:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:10.645 ************************************ 00:06:10.645 END TEST locking_overlapped_coremask_via_rpc 00:06:10.645 ************************************ 00:06:10.645 10:52:07 -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.645 10:52:07 -- event/cpu_locks.sh@15 -- # [[ -z 144958 ]] 00:06:10.645 10:52:07 -- event/cpu_locks.sh@15 -- # killprocess 144958 00:06:10.645 10:52:07 -- common/autotest_common.sh@946 -- # '[' -z 144958 ']' 00:06:10.645 10:52:07 -- common/autotest_common.sh@950 -- # kill -0 144958 00:06:10.645 10:52:07 -- common/autotest_common.sh@951 -- # uname 00:06:10.645 10:52:07 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.645 10:52:07 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144958 00:06:10.645 10:52:07 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.645 10:52:07 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.645 10:52:07 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144958' 00:06:10.645 killing process with pid 144958 00:06:10.645 10:52:07 -- common/autotest_common.sh@965 -- # kill 144958 00:06:10.645 10:52:07 -- common/autotest_common.sh@970 -- # wait 144958 00:06:10.906 10:52:07 -- event/cpu_locks.sh@16 -- # [[ -z 144963 ]] 00:06:10.906 10:52:07 -- event/cpu_locks.sh@16 -- # killprocess 144963 00:06:10.906 10:52:07 -- common/autotest_common.sh@946 -- # '[' -z 144963 ']' 00:06:10.906 10:52:07 -- common/autotest_common.sh@950 -- # kill -0 144963 00:06:10.906 10:52:07 -- common/autotest_common.sh@951 -- # uname 00:06:10.906 10:52:07 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.906 10:52:07 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144963 00:06:10.906 10:52:07 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:10.906 10:52:07 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:10.906 10:52:07 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144963' 00:06:10.906 killing process with pid 144963 00:06:10.906 10:52:07 -- common/autotest_common.sh@965 -- # kill 144963 00:06:10.906 10:52:07 -- common/autotest_common.sh@970 -- # wait 144963 00:06:11.167 10:52:07 -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.167 10:52:07 -- event/cpu_locks.sh@1 -- # cleanup 00:06:11.167 10:52:07 -- event/cpu_locks.sh@15 -- # [[ -z 144958 ]] 00:06:11.167 10:52:07 -- event/cpu_locks.sh@15 -- # killprocess 144958 00:06:11.167 
10:52:07 -- common/autotest_common.sh@946 -- # '[' -z 144958 ']' 00:06:11.167 10:52:07 -- common/autotest_common.sh@950 -- # kill -0 144958 00:06:11.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (144958) - No such process 00:06:11.167 10:52:07 -- common/autotest_common.sh@973 -- # echo 'Process with pid 144958 is not found' 00:06:11.167 Process with pid 144958 is not found 00:06:11.167 10:52:07 -- event/cpu_locks.sh@16 -- # [[ -z 144963 ]] 00:06:11.167 10:52:07 -- event/cpu_locks.sh@16 -- # killprocess 144963 00:06:11.167 10:52:07 -- common/autotest_common.sh@946 -- # '[' -z 144963 ']' 00:06:11.167 10:52:07 -- common/autotest_common.sh@950 -- # kill -0 144963 00:06:11.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (144963) - No such process 00:06:11.167 10:52:07 -- common/autotest_common.sh@973 -- # echo 'Process with pid 144963 is not found' 00:06:11.167 Process with pid 144963 is not found 00:06:11.167 10:52:07 -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.167 00:06:11.167 real 0m15.321s 00:06:11.167 user 0m25.658s 00:06:11.167 sys 0m4.552s 00:06:11.167 10:52:07 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.167 10:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.167 ************************************ 00:06:11.167 END TEST cpu_locks 00:06:11.167 ************************************ 00:06:11.167 00:06:11.168 real 0m41.130s 00:06:11.168 user 1m20.084s 00:06:11.168 sys 0m7.640s 00:06:11.168 10:52:07 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.168 10:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.168 ************************************ 00:06:11.168 END TEST event 00:06:11.168 ************************************ 00:06:11.168 10:52:07 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:11.168 10:52:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.168 10:52:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.168 10:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.168 ************************************ 00:06:11.168 START TEST thread 00:06:11.168 ************************************ 00:06:11.168 10:52:07 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:11.429 * Looking for test storage... 00:06:11.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:11.429 10:52:07 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.429 10:52:07 -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:11.429 10:52:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.429 10:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.429 ************************************ 00:06:11.429 START TEST thread_poller_perf 00:06:11.429 ************************************ 00:06:11.429 10:52:07 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.429 [2024-05-15 10:52:07.893641] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:11.429 [2024-05-15 10:52:07.893736] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145404 ] 00:06:11.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.429 [2024-05-15 10:52:07.955123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.429 [2024-05-15 10:52:08.018983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.429 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:12.817 ====================================== 00:06:12.817 busy:2410198814 (cyc) 00:06:12.817 total_run_count: 287000 00:06:12.817 tsc_hz: 2400000000 (cyc) 00:06:12.817 ====================================== 00:06:12.817 poller_cost: 8397 (cyc), 3498 (nsec) 00:06:12.817 00:06:12.817 real 0m1.207s 00:06:12.817 user 0m1.136s 00:06:12.817 sys 0m0.067s 00:06:12.817 10:52:09 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.817 10:52:09 -- common/autotest_common.sh@10 -- # set +x 00:06:12.817 ************************************ 00:06:12.817 END TEST thread_poller_perf 00:06:12.817 ************************************ 00:06:12.817 10:52:09 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.818 10:52:09 -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:12.818 10:52:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.818 10:52:09 -- common/autotest_common.sh@10 -- # set +x 00:06:12.818 ************************************ 00:06:12.818 START TEST thread_poller_perf 00:06:12.818 ************************************ 00:06:12.818 10:52:09 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.818 [2024-05-15 10:52:09.173056] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:12.818 [2024-05-15 10:52:09.173144] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145752 ] 00:06:12.818 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.818 [2024-05-15 10:52:09.235072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.818 [2024-05-15 10:52:09.298784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.818 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:13.762 ====================================== 00:06:13.762 busy:2401995144 (cyc) 00:06:13.762 total_run_count: 3809000 00:06:13.762 tsc_hz: 2400000000 (cyc) 00:06:13.762 ====================================== 00:06:13.762 poller_cost: 630 (cyc), 262 (nsec) 00:06:13.762 00:06:13.762 real 0m1.202s 00:06:13.762 user 0m1.128s 00:06:13.762 sys 0m0.071s 00:06:13.762 10:52:10 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.762 10:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.762 ************************************ 00:06:13.762 END TEST thread_poller_perf 00:06:13.762 ************************************ 00:06:13.762 10:52:10 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.762 00:06:13.762 real 0m2.650s 00:06:13.762 user 0m2.366s 00:06:13.762 sys 0m0.286s 00:06:13.762 10:52:10 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.762 10:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.762 ************************************ 00:06:13.762 END TEST thread 00:06:13.762 ************************************ 00:06:14.022 10:52:10 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:14.022 10:52:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.022 10:52:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.022 10:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:14.022 ************************************ 00:06:14.022 START TEST accel 00:06:14.022 ************************************ 00:06:14.022 10:52:10 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:14.022 * Looking for test storage... 00:06:14.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:14.022 10:52:10 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:14.022 10:52:10 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:14.022 10:52:10 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:14.022 10:52:10 -- accel/accel.sh@62 -- # spdk_tgt_pid=146150 00:06:14.022 10:52:10 -- accel/accel.sh@63 -- # waitforlisten 146150 00:06:14.022 10:52:10 -- common/autotest_common.sh@827 -- # '[' -z 146150 ']' 00:06:14.022 10:52:10 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.022 10:52:10 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.022 10:52:10 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:14.022 10:52:10 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.022 10:52:10 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.022 10:52:10 -- accel/accel.sh@61 -- # build_accel_config 00:06:14.022 10:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:14.022 10:52:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.022 10:52:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.022 10:52:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.022 10:52:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.022 10:52:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.022 10:52:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:14.022 10:52:10 -- accel/accel.sh@41 -- # jq -r . 
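The poller_cost figures in the two result blocks above follow directly from the printed counters: cycles per poll is the busy cycle count divided by total_run_count, and the nanosecond value converts cycles at the reported tsc_hz. A minimal sketch of that arithmetic for the first run, using only numbers from the log (the variable names are illustrative, not part of poller_perf):

    busy_cyc=2410198814
    total_run_count=287000
    tsc_hz=2400000000
    cost_cyc=$(( busy_cyc / total_run_count ))        # 8397 cyc
    cost_ns=$(( cost_cyc * 1000000000 / tsc_hz ))     # 3498 nsec

The same arithmetic on the second run (2401995144 cycles over 3809000 iterations) gives the 630 cyc / 262 nsec reported there; the gap presumably reflects the extra bookkeeping of a timed 1-microsecond-period poller compared with a 0-period busy poller.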
00:06:14.022 [2024-05-15 10:52:10.620574] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:14.022 [2024-05-15 10:52:10.620627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146150 ] 00:06:14.022 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.283 [2024-05-15 10:52:10.677881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.283 [2024-05-15 10:52:10.741239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.856 10:52:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.856 10:52:11 -- common/autotest_common.sh@860 -- # return 0 00:06:14.856 10:52:11 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:14.856 10:52:11 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:14.856 10:52:11 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:14.856 10:52:11 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:14.856 10:52:11 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:14.856 10:52:11 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:14.856 10:52:11 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:14.856 10:52:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.856 10:52:11 -- common/autotest_common.sh@10 -- # set +x 00:06:14.856 10:52:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.856 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.856 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.856 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.856 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.856 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.856 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.856 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.856 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.856 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.856 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.856 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.856 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.856 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.856 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # IFS== 00:06:14.857 10:52:11 -- accel/accel.sh@72 -- # read -r opc module 00:06:14.857 10:52:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.857 10:52:11 -- accel/accel.sh@75 -- # killprocess 146150 00:06:14.857 10:52:11 -- common/autotest_common.sh@946 -- # '[' -z 146150 ']' 00:06:14.857 10:52:11 -- common/autotest_common.sh@950 -- # kill -0 146150 00:06:14.857 10:52:11 -- common/autotest_common.sh@951 -- # uname 00:06:14.857 10:52:11 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.857 10:52:11 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 146150 00:06:14.857 10:52:11 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.857 10:52:11 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.857 10:52:11 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 146150' 00:06:14.857 killing process with pid 146150 00:06:14.857 10:52:11 -- common/autotest_common.sh@965 -- # kill 146150 00:06:14.857 10:52:11 -- common/autotest_common.sh@970 -- # wait 146150 00:06:15.119 10:52:11 -- accel/accel.sh@76 -- # trap - ERR 00:06:15.119 10:52:11 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:15.119 10:52:11 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:15.119 10:52:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.119 10:52:11 -- common/autotest_common.sh@10 -- # set +x 
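The get_expected_opcs loop traced above turns the accel_get_opc_assignments RPC output into opc=module pairs. A small illustration of the jq filter it uses, run against a hand-written sample object (the opcode names here are placeholders, not the real set returned by the RPC):

    echo '{"copy": "software", "fill": "software"}' \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software

Each output line is then split on '=' (via IFS==) into the opcode and its assigned module, which is why every expected_opcs entry in this run resolves to software: build_accel_config enabled no hardware accel modules, so every opcode is assigned to the software engine.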
00:06:15.119 10:52:11 -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:15.119 10:52:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:15.119 10:52:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.119 10:52:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.119 10:52:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.119 10:52:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.119 10:52:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.119 10:52:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.119 10:52:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.119 10:52:11 -- accel/accel.sh@41 -- # jq -r . 00:06:15.119 10:52:11 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.119 10:52:11 -- common/autotest_common.sh@10 -- # set +x 00:06:15.381 10:52:11 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:15.381 10:52:11 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:15.381 10:52:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.381 10:52:11 -- common/autotest_common.sh@10 -- # set +x 00:06:15.381 ************************************ 00:06:15.381 START TEST accel_missing_filename 00:06:15.381 ************************************ 00:06:15.381 10:52:11 -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:15.381 10:52:11 -- common/autotest_common.sh@648 -- # local es=0 00:06:15.381 10:52:11 -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:15.381 10:52:11 -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:15.381 10:52:11 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.381 10:52:11 -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:15.381 10:52:11 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.381 10:52:11 -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:15.381 10:52:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:15.381 10:52:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.381 10:52:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.381 10:52:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.381 10:52:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.381 10:52:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.381 10:52:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.381 10:52:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.381 10:52:11 -- accel/accel.sh@41 -- # jq -r . 00:06:15.381 [2024-05-15 10:52:11.841671] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:15.381 [2024-05-15 10:52:11.841774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146324 ] 00:06:15.381 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.381 [2024-05-15 10:52:11.917960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.381 [2024-05-15 10:52:11.987891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.381 [2024-05-15 10:52:12.020136] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.642 [2024-05-15 10:52:12.057145] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:15.642 A filename is required. 00:06:15.642 10:52:12 -- common/autotest_common.sh@651 -- # es=234 00:06:15.642 10:52:12 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.642 10:52:12 -- common/autotest_common.sh@660 -- # es=106 00:06:15.642 10:52:12 -- common/autotest_common.sh@661 -- # case "$es" in 00:06:15.642 10:52:12 -- common/autotest_common.sh@668 -- # es=1 00:06:15.642 10:52:12 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.642 00:06:15.642 real 0m0.299s 00:06:15.642 user 0m0.226s 00:06:15.642 sys 0m0.113s 00:06:15.642 10:52:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.642 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:15.642 ************************************ 00:06:15.642 END TEST accel_missing_filename 00:06:15.642 ************************************ 00:06:15.642 10:52:12 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.642 10:52:12 -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:15.642 10:52:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.642 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:15.642 ************************************ 00:06:15.642 START TEST accel_compress_verify 00:06:15.642 ************************************ 00:06:15.642 10:52:12 -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.642 10:52:12 -- common/autotest_common.sh@648 -- # local es=0 00:06:15.642 10:52:12 -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.642 10:52:12 -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:15.642 10:52:12 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.642 10:52:12 -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:15.642 10:52:12 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.642 10:52:12 -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.642 10:52:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.642 10:52:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.642 10:52:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.642 10:52:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.642 10:52:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.642 10:52:12 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.642 10:52:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.642 10:52:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.642 10:52:12 -- accel/accel.sh@41 -- # jq -r . 00:06:15.642 [2024-05-15 10:52:12.219029] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:15.642 [2024-05-15 10:52:12.219108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146531 ] 00:06:15.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.642 [2024-05-15 10:52:12.281806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.904 [2024-05-15 10:52:12.350930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.904 [2024-05-15 10:52:12.382792] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.904 [2024-05-15 10:52:12.419485] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:15.904 00:06:15.904 Compression does not support the verify option, aborting. 00:06:15.904 10:52:12 -- common/autotest_common.sh@651 -- # es=161 00:06:15.904 10:52:12 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.904 10:52:12 -- common/autotest_common.sh@660 -- # es=33 00:06:15.904 10:52:12 -- common/autotest_common.sh@661 -- # case "$es" in 00:06:15.904 10:52:12 -- common/autotest_common.sh@668 -- # es=1 00:06:15.904 10:52:12 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.904 00:06:15.904 real 0m0.283s 00:06:15.904 user 0m0.214s 00:06:15.904 sys 0m0.110s 00:06:15.904 10:52:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.904 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:15.904 ************************************ 00:06:15.904 END TEST accel_compress_verify 00:06:15.904 ************************************ 00:06:15.904 10:52:12 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:15.904 10:52:12 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:15.904 10:52:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.904 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:15.904 ************************************ 00:06:15.904 START TEST accel_wrong_workload 00:06:15.904 ************************************ 00:06:15.904 10:52:12 -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:15.904 10:52:12 -- common/autotest_common.sh@648 -- # local es=0 00:06:15.904 10:52:12 -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:15.904 10:52:12 -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:15.904 10:52:12 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.904 10:52:12 -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:15.904 10:52:12 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.904 10:52:12 -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:15.904 10:52:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:16.166 10:52:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.166 10:52:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.166 10:52:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:52:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:06:16.166 10:52:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:52:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.166 10:52:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.166 10:52:12 -- accel/accel.sh@41 -- # jq -r . 00:06:16.166 Unsupported workload type: foobar 00:06:16.166 [2024-05-15 10:52:12.580392] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:16.166 accel_perf options: 00:06:16.166 [-h help message] 00:06:16.166 [-q queue depth per core] 00:06:16.166 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:16.166 [-T number of threads per core 00:06:16.166 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:16.166 [-t time in seconds] 00:06:16.166 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:16.166 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:16.166 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:16.166 [-l for compress/decompress workloads, name of uncompressed input file 00:06:16.166 [-S for crc32c workload, use this seed value (default 0) 00:06:16.166 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:16.166 [-f for fill workload, use this BYTE value (default 255) 00:06:16.166 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:16.166 [-y verify result if this switch is on] 00:06:16.166 [-a tasks to allocate per core (default: same value as -q)] 00:06:16.166 Can be used to spread operations across a wider range of memory. 00:06:16.166 10:52:12 -- common/autotest_common.sh@651 -- # es=1 00:06:16.166 10:52:12 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.166 10:52:12 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.166 10:52:12 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.166 00:06:16.166 real 0m0.035s 00:06:16.166 user 0m0.021s 00:06:16.166 sys 0m0.014s 00:06:16.166 10:52:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.166 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:16.166 ************************************ 00:06:16.166 END TEST accel_wrong_workload 00:06:16.166 ************************************ 00:06:16.166 Error: writing output failed: Broken pipe 00:06:16.166 10:52:12 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:16.166 10:52:12 -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:16.166 10:52:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.166 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:16.166 ************************************ 00:06:16.166 START TEST accel_negative_buffers 00:06:16.166 ************************************ 00:06:16.166 10:52:12 -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:16.166 10:52:12 -- common/autotest_common.sh@648 -- # local es=0 00:06:16.166 10:52:12 -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:16.166 10:52:12 -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:16.166 10:52:12 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.166 10:52:12 -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:16.166 10:52:12 -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:16.166 10:52:12 -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:16.166 10:52:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:16.166 10:52:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.166 10:52:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.166 10:52:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:52:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:52:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:52:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.166 10:52:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.166 10:52:12 -- accel/accel.sh@41 -- # jq -r . 00:06:16.166 -x option must be non-negative. 00:06:16.166 [2024-05-15 10:52:12.697240] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:16.166 accel_perf options: 00:06:16.166 [-h help message] 00:06:16.166 [-q queue depth per core] 00:06:16.166 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:16.166 [-T number of threads per core 00:06:16.166 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:16.166 [-t time in seconds] 00:06:16.166 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:16.166 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:16.166 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:16.166 [-l for compress/decompress workloads, name of uncompressed input file 00:06:16.166 [-S for crc32c workload, use this seed value (default 0) 00:06:16.166 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:16.166 [-f for fill workload, use this BYTE value (default 255) 00:06:16.166 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:16.166 [-y verify result if this switch is on] 00:06:16.166 [-a tasks to allocate per core (default: same value as -q)] 00:06:16.166 Can be used to spread operations across a wider range of memory. 
00:06:16.166 10:52:12 -- common/autotest_common.sh@651 -- # es=1 00:06:16.166 10:52:12 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.166 10:52:12 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.166 10:52:12 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.166 00:06:16.166 real 0m0.035s 00:06:16.166 user 0m0.023s 00:06:16.166 sys 0m0.012s 00:06:16.166 10:52:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.166 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:16.166 ************************************ 00:06:16.166 END TEST accel_negative_buffers 00:06:16.166 ************************************ 00:06:16.166 Error: writing output failed: Broken pipe 00:06:16.166 10:52:12 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:16.166 10:52:12 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:16.166 10:52:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.166 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:16.166 ************************************ 00:06:16.166 START TEST accel_crc32c 00:06:16.166 ************************************ 00:06:16.166 10:52:12 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:16.166 10:52:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.166 10:52:12 -- accel/accel.sh@17 -- # local accel_module 00:06:16.166 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.166 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.166 10:52:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:16.166 10:52:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:16.166 10:52:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.166 10:52:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.166 10:52:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:52:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:52:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:52:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.166 10:52:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.166 10:52:12 -- accel/accel.sh@41 -- # jq -r . 00:06:16.166 [2024-05-15 10:52:12.809933] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:16.166 [2024-05-15 10:52:12.810022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146613 ] 00:06:16.428 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.428 [2024-05-15 10:52:12.875429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.428 [2024-05-15 10:52:12.947827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val= 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val= 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val=0x1 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val= 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val= 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val=crc32c 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val=32 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val= 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val=software 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val=32 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val=32 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- 
accel/accel.sh@20 -- # val=1 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val=Yes 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val= 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:16.428 10:52:12 -- accel/accel.sh@20 -- # val= 00:06:16.428 10:52:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # IFS=: 00:06:16.428 10:52:12 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.814 10:52:14 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:17.814 10:52:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.814 00:06:17.814 real 0m1.295s 00:06:17.814 user 0m1.198s 00:06:17.814 sys 0m0.108s 00:06:17.814 10:52:14 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.814 10:52:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.814 ************************************ 00:06:17.814 END TEST accel_crc32c 00:06:17.814 ************************************ 00:06:17.814 10:52:14 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:17.814 10:52:14 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:17.814 10:52:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.814 10:52:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.814 ************************************ 00:06:17.814 START TEST 
accel_crc32c_C2 00:06:17.814 ************************************ 00:06:17.814 10:52:14 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:17.814 10:52:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.814 10:52:14 -- accel/accel.sh@17 -- # local accel_module 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.814 10:52:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.814 10:52:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.814 10:52:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.814 10:52:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.814 10:52:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.814 10:52:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.814 10:52:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.814 10:52:14 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.814 10:52:14 -- accel/accel.sh@41 -- # jq -r . 00:06:17.814 [2024-05-15 10:52:14.183334] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:17.814 [2024-05-15 10:52:14.183434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146961 ] 00:06:17.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.814 [2024-05-15 10:52:14.245999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.814 [2024-05-15 10:52:14.315684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val=0x1 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val=crc32c 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.814 10:52:14 -- accel/accel.sh@20 -- # val=0 00:06:17.814 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.814 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val=software 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val=32 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val=32 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val=1 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val=Yes 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:17.815 10:52:14 -- accel/accel.sh@20 -- # val= 00:06:17.815 10:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # IFS=: 00:06:17.815 10:52:14 -- accel/accel.sh@19 -- # read -r var val 00:06:19.203 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.203 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.203 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.203 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.203 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.203 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.203 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.203 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.203 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.203 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.203 10:52:15 -- 
accel/accel.sh@19 -- # read -r var val 00:06:19.203 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.203 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.203 10:52:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.203 10:52:15 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:19.203 10:52:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.203 00:06:19.203 real 0m1.291s 00:06:19.203 user 0m1.192s 00:06:19.203 sys 0m0.111s 00:06:19.203 10:52:15 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.203 10:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:19.203 ************************************ 00:06:19.203 END TEST accel_crc32c_C2 00:06:19.203 ************************************ 00:06:19.203 10:52:15 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:19.203 10:52:15 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:19.203 10:52:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.203 10:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:19.203 ************************************ 00:06:19.203 START TEST accel_copy 00:06:19.203 ************************************ 00:06:19.203 10:52:15 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:19.203 10:52:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.203 10:52:15 -- accel/accel.sh@17 -- # local accel_module 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.203 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.203 10:52:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:19.203 10:52:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:19.203 10:52:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.203 10:52:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.203 10:52:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.203 10:52:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.203 10:52:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.203 10:52:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.203 10:52:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.203 10:52:15 -- accel/accel.sh@41 -- # jq -r . 00:06:19.203 [2024-05-15 10:52:15.550216] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:19.203 [2024-05-15 10:52:15.550304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147316 ] 00:06:19.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.204 [2024-05-15 10:52:15.610054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.204 [2024-05-15 10:52:15.673807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val=0x1 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val=copy 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val=software 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val=32 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val=32 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val=1 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val=Yes 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:19.204 10:52:15 -- accel/accel.sh@20 -- # val= 00:06:19.204 10:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # IFS=: 00:06:19.204 10:52:15 -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 10:52:16 -- accel/accel.sh@20 -- # val= 00:06:20.145 10:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 10:52:16 -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 10:52:16 -- accel/accel.sh@19 -- # read -r var val 00:06:20.145 10:52:16 -- accel/accel.sh@20 -- # val= 00:06:20.145 10:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.145 10:52:16 -- accel/accel.sh@19 -- # IFS=: 00:06:20.145 10:52:16 -- accel/accel.sh@19 -- # read -r var val 00:06:20.406 10:52:16 -- accel/accel.sh@20 -- # val= 00:06:20.407 10:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # IFS=: 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # read -r var val 00:06:20.407 10:52:16 -- accel/accel.sh@20 -- # val= 00:06:20.407 10:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # IFS=: 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # read -r var val 00:06:20.407 10:52:16 -- accel/accel.sh@20 -- # val= 00:06:20.407 10:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # IFS=: 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # read -r var val 00:06:20.407 10:52:16 -- accel/accel.sh@20 -- # val= 00:06:20.407 10:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # IFS=: 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # read -r var val 00:06:20.407 10:52:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.407 10:52:16 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:20.407 10:52:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.407 00:06:20.407 real 0m1.281s 00:06:20.407 user 0m1.186s 00:06:20.407 sys 0m0.106s 00:06:20.407 10:52:16 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.407 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:06:20.407 ************************************ 00:06:20.407 END TEST accel_copy 00:06:20.407 ************************************ 00:06:20.407 10:52:16 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.407 10:52:16 -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:20.407 10:52:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.407 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:06:20.407 ************************************ 00:06:20.407 START TEST accel_fill 00:06:20.407 ************************************ 00:06:20.407 10:52:16 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.407 10:52:16 -- accel/accel.sh@16 -- # local accel_opc 
00:06:20.407 10:52:16 -- accel/accel.sh@17 -- # local accel_module 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # IFS=: 00:06:20.407 10:52:16 -- accel/accel.sh@19 -- # read -r var val 00:06:20.407 10:52:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.407 10:52:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.407 10:52:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.407 10:52:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.407 10:52:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.407 10:52:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.407 10:52:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.407 10:52:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.407 10:52:16 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.407 10:52:16 -- accel/accel.sh@41 -- # jq -r . 00:06:20.407 [2024-05-15 10:52:16.906939] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:20.407 [2024-05-15 10:52:16.907009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147538 ] 00:06:20.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.407 [2024-05-15 10:52:16.968852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.407 [2024-05-15 10:52:17.040586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.667 10:52:17 -- accel/accel.sh@20 -- # val= 00:06:20.667 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.667 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.667 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.667 10:52:17 -- accel/accel.sh@20 -- # val= 00:06:20.667 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.667 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.667 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.667 10:52:17 -- accel/accel.sh@20 -- # val=0x1 00:06:20.667 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.667 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.667 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val= 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val= 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val=fill 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val=0x80 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- 
# read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val= 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val=software 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val=64 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val=64 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val=1 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val=Yes 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val= 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:20.668 10:52:17 -- accel/accel.sh@20 -- # val= 00:06:20.668 10:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # IFS=: 00:06:20.668 10:52:17 -- accel/accel.sh@19 -- # read -r var val 00:06:21.610 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.610 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.610 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.610 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.610 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.610 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.610 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.610 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.610 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.610 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.610 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.610 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # 
IFS=: 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.610 10:52:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.610 10:52:18 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:21.610 10:52:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.610 00:06:21.610 real 0m1.290s 00:06:21.610 user 0m1.200s 00:06:21.610 sys 0m0.102s 00:06:21.610 10:52:18 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.610 10:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:21.610 ************************************ 00:06:21.610 END TEST accel_fill 00:06:21.610 ************************************ 00:06:21.610 10:52:18 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:21.610 10:52:18 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:21.610 10:52:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.610 10:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:21.610 ************************************ 00:06:21.610 START TEST accel_copy_crc32c 00:06:21.610 ************************************ 00:06:21.610 10:52:18 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:21.610 10:52:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.610 10:52:18 -- accel/accel.sh@17 -- # local accel_module 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.610 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.610 10:52:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:21.610 10:52:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:21.610 10:52:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.610 10:52:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.610 10:52:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.610 10:52:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.610 10:52:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.610 10:52:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.610 10:52:18 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.610 10:52:18 -- accel/accel.sh@41 -- # jq -r . 00:06:21.872 [2024-05-15 10:52:18.278181] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:21.872 [2024-05-15 10:52:18.278295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147730 ] 00:06:21.872 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.872 [2024-05-15 10:52:18.351628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.872 [2024-05-15 10:52:18.422986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val=0x1 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val=0 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val=software 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val=32 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 
00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val=32 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val=1 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val=Yes 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:21.872 10:52:18 -- accel/accel.sh@20 -- # val= 00:06:21.872 10:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # IFS=: 00:06:21.872 10:52:18 -- accel/accel.sh@19 -- # read -r var val 00:06:23.259 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.259 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.260 10:52:19 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:23.260 10:52:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.260 00:06:23.260 real 0m1.303s 00:06:23.260 user 0m1.201s 00:06:23.260 sys 0m0.113s 00:06:23.260 10:52:19 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.260 10:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:23.260 ************************************ 00:06:23.260 END TEST accel_copy_crc32c 00:06:23.260 ************************************ 00:06:23.260 10:52:19 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:23.260 
10:52:19 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:23.260 10:52:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.260 10:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:23.260 ************************************ 00:06:23.260 START TEST accel_copy_crc32c_C2 00:06:23.260 ************************************ 00:06:23.260 10:52:19 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:23.260 10:52:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.260 10:52:19 -- accel/accel.sh@17 -- # local accel_module 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:23.260 10:52:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:23.260 10:52:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.260 10:52:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.260 10:52:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.260 10:52:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.260 10:52:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.260 10:52:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.260 10:52:19 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.260 10:52:19 -- accel/accel.sh@41 -- # jq -r . 00:06:23.260 [2024-05-15 10:52:19.660386] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:23.260 [2024-05-15 10:52:19.660444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148057 ] 00:06:23.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.260 [2024-05-15 10:52:19.720428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.260 [2024-05-15 10:52:19.786049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val=0x1 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 
10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val=0 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val=software 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val=32 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val=32 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val=1 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val=Yes 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:23.260 10:52:19 -- accel/accel.sh@20 -- # val= 00:06:23.260 10:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # IFS=: 00:06:23.260 10:52:19 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:20 -- accel/accel.sh@20 -- # val= 00:06:24.645 10:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:20 -- accel/accel.sh@20 -- # val= 00:06:24.645 10:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:20 -- accel/accel.sh@20 -- # val= 00:06:24.645 10:52:20 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:20 -- accel/accel.sh@20 -- # val= 00:06:24.645 10:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:20 -- accel/accel.sh@20 -- # val= 00:06:24.645 10:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:20 -- accel/accel.sh@20 -- # val= 00:06:24.645 10:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.645 10:52:20 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:24.645 10:52:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.645 00:06:24.645 real 0m1.283s 00:06:24.645 user 0m1.197s 00:06:24.645 sys 0m0.098s 00:06:24.645 10:52:20 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.645 10:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:24.645 ************************************ 00:06:24.645 END TEST accel_copy_crc32c_C2 00:06:24.645 ************************************ 00:06:24.645 10:52:20 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:24.645 10:52:20 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:24.645 10:52:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.645 10:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:24.645 ************************************ 00:06:24.645 START TEST accel_dualcast 00:06:24.645 ************************************ 00:06:24.645 10:52:20 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:24.645 10:52:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.645 10:52:20 -- accel/accel.sh@17 -- # local accel_module 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:20 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:24.645 10:52:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:24.645 10:52:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.645 10:52:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.645 10:52:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.645 10:52:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.645 10:52:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.645 10:52:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.645 10:52:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.645 10:52:20 -- accel/accel.sh@41 -- # jq -r . 00:06:24.645 [2024-05-15 10:52:21.022219] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:24.645 [2024-05-15 10:52:21.022336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148404 ] 00:06:24.645 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.645 [2024-05-15 10:52:21.092473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.645 [2024-05-15 10:52:21.158344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.645 10:52:21 -- accel/accel.sh@20 -- # val= 00:06:24.645 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.645 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:21 -- accel/accel.sh@20 -- # val= 00:06:24.645 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.645 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.645 10:52:21 -- accel/accel.sh@20 -- # val=0x1 00:06:24.645 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.645 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.645 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val= 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val= 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val=dualcast 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val= 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val=software 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val=32 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val=32 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val=1 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val=Yes 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val= 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:24.646 10:52:21 -- accel/accel.sh@20 -- # val= 00:06:24.646 10:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # IFS=: 00:06:24.646 10:52:21 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.033 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.033 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.033 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.033 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.033 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.033 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.033 10:52:22 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:26.033 10:52:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.033 00:06:26.033 real 0m1.293s 00:06:26.033 user 0m1.200s 00:06:26.033 sys 0m0.103s 00:06:26.033 10:52:22 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.033 10:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:26.033 ************************************ 00:06:26.033 END TEST accel_dualcast 00:06:26.033 ************************************ 00:06:26.033 10:52:22 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:26.033 10:52:22 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:26.033 10:52:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.033 10:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:26.033 ************************************ 00:06:26.033 START TEST accel_compare 00:06:26.033 ************************************ 00:06:26.033 10:52:22 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:26.033 10:52:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.033 10:52:22 -- 
accel/accel.sh@17 -- # local accel_module 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:26.033 10:52:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:26.033 10:52:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.033 10:52:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.033 10:52:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.033 10:52:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.033 10:52:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.033 10:52:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.033 10:52:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.033 10:52:22 -- accel/accel.sh@41 -- # jq -r . 00:06:26.033 [2024-05-15 10:52:22.396919] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:26.033 [2024-05-15 10:52:22.397008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148756 ] 00:06:26.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.033 [2024-05-15 10:52:22.458941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.033 [2024-05-15 10:52:22.523238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.033 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.033 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.033 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.033 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.033 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val=0x1 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val=compare 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- 
accel/accel.sh@20 -- # val=software 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val=32 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val=32 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val=1 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val=Yes 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:26.034 10:52:22 -- accel/accel.sh@20 -- # val= 00:06:26.034 10:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # IFS=: 00:06:26.034 10:52:22 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.420 10:52:23 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:27.420 10:52:23 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:27.420 00:06:27.420 real 0m1.284s 00:06:27.420 user 0m1.196s 00:06:27.420 sys 0m0.098s 00:06:27.420 10:52:23 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.420 10:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:27.420 ************************************ 00:06:27.420 END TEST accel_compare 00:06:27.420 ************************************ 00:06:27.420 10:52:23 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:27.420 10:52:23 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:27.420 10:52:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.420 10:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:27.420 ************************************ 00:06:27.420 START TEST accel_xor 00:06:27.420 ************************************ 00:06:27.420 10:52:23 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:27.420 10:52:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.420 10:52:23 -- accel/accel.sh@17 -- # local accel_module 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:27.420 10:52:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:27.420 10:52:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.420 10:52:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.420 10:52:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.420 10:52:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.420 10:52:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.420 10:52:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.420 10:52:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.420 10:52:23 -- accel/accel.sh@41 -- # jq -r . 00:06:27.420 [2024-05-15 10:52:23.757907] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:27.420 [2024-05-15 10:52:23.757971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149062 ] 00:06:27.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.420 [2024-05-15 10:52:23.818278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.420 [2024-05-15 10:52:23.882262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val=0x1 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val=xor 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val=2 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val=software 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val=32 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- accel/accel.sh@20 -- # val=32 00:06:27.420 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.420 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.420 10:52:23 -- 
accel/accel.sh@20 -- # val=1 00:06:27.421 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.421 10:52:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.421 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.421 10:52:23 -- accel/accel.sh@20 -- # val=Yes 00:06:27.421 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.421 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.421 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:27.421 10:52:23 -- accel/accel.sh@20 -- # val= 00:06:27.421 10:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # IFS=: 00:06:27.421 10:52:23 -- accel/accel.sh@19 -- # read -r var val 00:06:28.362 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.362 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.362 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.362 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.362 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.362 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.362 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.362 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.362 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.362 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.362 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.363 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.363 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.363 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.363 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.363 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.363 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.363 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.363 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.363 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.363 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.363 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.363 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.363 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.363 10:52:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.363 10:52:25 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:28.363 10:52:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.363 00:06:28.363 real 0m1.280s 00:06:28.363 user 0m1.193s 00:06:28.363 sys 0m0.098s 00:06:28.363 10:52:25 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.363 10:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:28.363 ************************************ 00:06:28.363 END TEST accel_xor 00:06:28.363 ************************************ 00:06:28.624 10:52:25 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:28.624 10:52:25 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:28.624 10:52:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.624 10:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:28.624 ************************************ 00:06:28.624 START TEST accel_xor 
00:06:28.624 ************************************ 00:06:28.624 10:52:25 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:28.624 10:52:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.624 10:52:25 -- accel/accel.sh@17 -- # local accel_module 00:06:28.624 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.624 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.624 10:52:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:28.624 10:52:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:28.624 10:52:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.624 10:52:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.624 10:52:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.624 10:52:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.624 10:52:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.624 10:52:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.624 10:52:25 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.624 10:52:25 -- accel/accel.sh@41 -- # jq -r . 00:06:28.624 [2024-05-15 10:52:25.117481] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:28.624 [2024-05-15 10:52:25.117540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149240 ] 00:06:28.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.624 [2024-05-15 10:52:25.180127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.624 [2024-05-15 10:52:25.250525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.886 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.886 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.886 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.886 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.886 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.886 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.886 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.886 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val=0x1 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val=xor 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val=3 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val=software 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val=32 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val=32 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val=1 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val=Yes 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:28.887 10:52:25 -- accel/accel.sh@20 -- # val= 00:06:28.887 10:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # IFS=: 00:06:28.887 10:52:25 -- accel/accel.sh@19 -- # read -r var val 00:06:29.832 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:29.832 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:29.832 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:29.832 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:29.832 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:29.832 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:29.832 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:29.832 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:29.832 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:29.832 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # 
read -r var val 00:06:29.832 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:29.832 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:29.832 10:52:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.832 10:52:26 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:29.832 10:52:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.832 00:06:29.832 real 0m1.290s 00:06:29.832 user 0m1.202s 00:06:29.832 sys 0m0.099s 00:06:29.832 10:52:26 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.832 10:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:29.832 ************************************ 00:06:29.832 END TEST accel_xor 00:06:29.832 ************************************ 00:06:29.832 10:52:26 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:29.832 10:52:26 -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:29.832 10:52:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.832 10:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:29.832 ************************************ 00:06:29.832 START TEST accel_dif_verify 00:06:29.832 ************************************ 00:06:29.832 10:52:26 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:29.832 10:52:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.832 10:52:26 -- accel/accel.sh@17 -- # local accel_module 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:29.832 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:29.832 10:52:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:29.832 10:52:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:29.832 10:52:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.832 10:52:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.832 10:52:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.832 10:52:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.832 10:52:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.832 10:52:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.832 10:52:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.832 10:52:26 -- accel/accel.sh@41 -- # jq -r . 00:06:29.832 [2024-05-15 10:52:26.484595] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:29.832 [2024-05-15 10:52:26.484652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149493 ] 00:06:30.094 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.094 [2024-05-15 10:52:26.545681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.094 [2024-05-15 10:52:26.612651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val=0x1 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val=dif_verify 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.094 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:30.094 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.094 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.095 10:52:26 -- accel/accel.sh@20 -- # val=software 00:06:30.095 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.095 10:52:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r 
var val 00:06:30.095 10:52:26 -- accel/accel.sh@20 -- # val=32 00:06:30.095 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.095 10:52:26 -- accel/accel.sh@20 -- # val=32 00:06:30.095 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.095 10:52:26 -- accel/accel.sh@20 -- # val=1 00:06:30.095 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.095 10:52:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.095 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.095 10:52:26 -- accel/accel.sh@20 -- # val=No 00:06:30.095 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.095 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:30.095 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:30.095 10:52:26 -- accel/accel.sh@20 -- # val= 00:06:30.095 10:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # IFS=: 00:06:30.095 10:52:26 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:27 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:27 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:27 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:27 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:27 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:27 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.482 10:52:27 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:31.482 10:52:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.482 00:06:31.482 real 0m1.285s 00:06:31.482 user 0m1.198s 00:06:31.482 sys 0m0.100s 00:06:31.482 10:52:27 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.482 10:52:27 -- common/autotest_common.sh@10 -- # set +x 00:06:31.482 
************************************ 00:06:31.482 END TEST accel_dif_verify 00:06:31.482 ************************************ 00:06:31.482 10:52:27 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:31.482 10:52:27 -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:31.482 10:52:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.482 10:52:27 -- common/autotest_common.sh@10 -- # set +x 00:06:31.482 ************************************ 00:06:31.482 START TEST accel_dif_generate 00:06:31.482 ************************************ 00:06:31.482 10:52:27 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:31.482 10:52:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.482 10:52:27 -- accel/accel.sh@17 -- # local accel_module 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:27 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:31.482 10:52:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:31.482 10:52:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.482 10:52:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.482 10:52:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.482 10:52:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.482 10:52:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.482 10:52:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.482 10:52:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.482 10:52:27 -- accel/accel.sh@41 -- # jq -r . 00:06:31.482 [2024-05-15 10:52:27.851101] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:31.482 [2024-05-15 10:52:27.851181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149851 ] 00:06:31.482 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.482 [2024-05-15 10:52:27.913905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.482 [2024-05-15 10:52:27.978977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val=0x1 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val=dif_generate 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val= 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.482 10:52:28 -- accel/accel.sh@20 -- # val=software 00:06:31.482 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.482 10:52:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.482 10:52:28 -- accel/accel.sh@19 -- # read 
-r var val 00:06:31.483 10:52:28 -- accel/accel.sh@20 -- # val=32 00:06:31.483 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.483 10:52:28 -- accel/accel.sh@20 -- # val=32 00:06:31.483 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.483 10:52:28 -- accel/accel.sh@20 -- # val=1 00:06:31.483 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.483 10:52:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.483 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.483 10:52:28 -- accel/accel.sh@20 -- # val=No 00:06:31.483 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.483 10:52:28 -- accel/accel.sh@20 -- # val= 00:06:31.483 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:31.483 10:52:28 -- accel/accel.sh@20 -- # val= 00:06:31.483 10:52:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # IFS=: 00:06:31.483 10:52:28 -- accel/accel.sh@19 -- # read -r var val 00:06:32.871 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.871 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.871 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.871 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.872 10:52:29 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:32.872 10:52:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.872 00:06:32.872 real 0m1.285s 00:06:32.872 user 0m1.201s 00:06:32.872 sys 0m0.095s 00:06:32.872 10:52:29 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.872 10:52:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.872 
************************************ 00:06:32.872 END TEST accel_dif_generate 00:06:32.872 ************************************ 00:06:32.872 10:52:29 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:32.872 10:52:29 -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:32.872 10:52:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.872 10:52:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.872 ************************************ 00:06:32.872 START TEST accel_dif_generate_copy 00:06:32.872 ************************************ 00:06:32.872 10:52:29 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:32.872 10:52:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.872 10:52:29 -- accel/accel.sh@17 -- # local accel_module 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:32.872 10:52:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:32.872 10:52:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.872 10:52:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.872 10:52:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.872 10:52:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.872 10:52:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.872 10:52:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.872 10:52:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.872 10:52:29 -- accel/accel.sh@41 -- # jq -r . 00:06:32.872 [2024-05-15 10:52:29.215106] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:32.872 [2024-05-15 10:52:29.215194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150198 ] 00:06:32.872 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.872 [2024-05-15 10:52:29.276525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.872 [2024-05-15 10:52:29.342012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val=0x1 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val=software 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val=32 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val=32 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var 
val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val=1 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val=No 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:32.872 10:52:29 -- accel/accel.sh@20 -- # val= 00:06:32.872 10:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # IFS=: 00:06:32.872 10:52:29 -- accel/accel.sh@19 -- # read -r var val 00:06:33.815 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:33.815 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.815 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:33.815 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:33.815 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:33.815 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.815 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:33.815 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:33.815 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.076 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.077 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.077 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.077 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.077 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.077 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.077 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.077 10:52:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.077 10:52:30 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:34.077 10:52:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.077 00:06:34.077 real 0m1.285s 00:06:34.077 user 0m1.193s 00:06:34.077 sys 0m0.102s 00:06:34.077 10:52:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.077 10:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:34.077 ************************************ 00:06:34.077 END TEST accel_dif_generate_copy 00:06:34.077 ************************************ 00:06:34.077 10:52:30 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:34.077 10:52:30 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.077 10:52:30 -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:34.077 10:52:30 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.077 10:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:34.077 ************************************ 00:06:34.077 START TEST accel_comp 00:06:34.077 ************************************ 00:06:34.077 10:52:30 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.077 10:52:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.077 10:52:30 -- accel/accel.sh@17 -- # local accel_module 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.077 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.077 10:52:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.077 10:52:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.077 10:52:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.077 10:52:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.077 10:52:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.077 10:52:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.077 10:52:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.077 10:52:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.077 10:52:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.077 10:52:30 -- accel/accel.sh@41 -- # jq -r . 00:06:34.077 [2024-05-15 10:52:30.580154] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:34.077 [2024-05-15 10:52:30.580236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150547 ] 00:06:34.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.077 [2024-05-15 10:52:30.641711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.077 [2024-05-15 10:52:30.706742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val=0x1 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 
-- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val=compress 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val=software 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val=32 00:06:34.338 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.338 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.338 10:52:30 -- accel/accel.sh@20 -- # val=32 00:06:34.339 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.339 10:52:30 -- accel/accel.sh@20 -- # val=1 00:06:34.339 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.339 10:52:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.339 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.339 10:52:30 -- accel/accel.sh@20 -- # val=No 00:06:34.339 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.339 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.339 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:34.339 10:52:30 -- accel/accel.sh@20 -- # val= 00:06:34.339 10:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # IFS=: 00:06:34.339 10:52:30 -- accel/accel.sh@19 -- # read -r var val 00:06:35.282 10:52:31 -- accel/accel.sh@20 -- # val= 00:06:35.282 10:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # IFS=: 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # read -r var val 00:06:35.282 10:52:31 -- accel/accel.sh@20 -- # val= 00:06:35.282 10:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # IFS=: 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # read 
-r var val 00:06:35.282 10:52:31 -- accel/accel.sh@20 -- # val= 00:06:35.282 10:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # IFS=: 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # read -r var val 00:06:35.282 10:52:31 -- accel/accel.sh@20 -- # val= 00:06:35.282 10:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # IFS=: 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # read -r var val 00:06:35.282 10:52:31 -- accel/accel.sh@20 -- # val= 00:06:35.282 10:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # IFS=: 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # read -r var val 00:06:35.282 10:52:31 -- accel/accel.sh@20 -- # val= 00:06:35.282 10:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # IFS=: 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # read -r var val 00:06:35.282 10:52:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.282 10:52:31 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:35.282 10:52:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.282 00:06:35.282 real 0m1.285s 00:06:35.282 user 0m1.191s 00:06:35.282 sys 0m0.106s 00:06:35.282 10:52:31 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.282 10:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:35.282 ************************************ 00:06:35.282 END TEST accel_comp 00:06:35.282 ************************************ 00:06:35.282 10:52:31 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.282 10:52:31 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:35.282 10:52:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.282 10:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:35.282 ************************************ 00:06:35.282 START TEST accel_decomp 00:06:35.282 ************************************ 00:06:35.282 10:52:31 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.282 10:52:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.282 10:52:31 -- accel/accel.sh@17 -- # local accel_module 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # IFS=: 00:06:35.282 10:52:31 -- accel/accel.sh@19 -- # read -r var val 00:06:35.282 10:52:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.282 10:52:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.282 10:52:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.282 10:52:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.282 10:52:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.282 10:52:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.282 10:52:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.282 10:52:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.282 10:52:31 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.282 10:52:31 -- accel/accel.sh@41 -- # jq -r . 00:06:35.544 [2024-05-15 10:52:31.947575] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
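The compress/decompress tests add two flags to the accel_perf invocation recorded above: -l points at an input data file (spdk/test/accel/bib in this workspace) and -y enables verification of each operation's output; the trace is consistent with this reading, recording val=Yes for the -y runs and val=No for the runs without it. A sketch of the standalone equivalent of the accel_decomp run being set up here:

  # sketch: 1-second software decompress workload over the bib test file, with verification
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y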
00:06:35.544 [2024-05-15 10:52:31.947668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150746 ] 00:06:35.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.544 [2024-05-15 10:52:32.011808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.544 [2024-05-15 10:52:32.082865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val= 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val= 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val= 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val=0x1 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val= 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val= 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val=decompress 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val= 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val=software 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val=32 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 
-- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val=32 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val=1 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val=Yes 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val= 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:35.544 10:52:32 -- accel/accel.sh@20 -- # val= 00:06:35.544 10:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # IFS=: 00:06:35.544 10:52:32 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.929 10:52:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.929 10:52:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.929 00:06:36.929 real 0m1.295s 00:06:36.929 user 0m1.209s 00:06:36.929 sys 0m0.098s 00:06:36.929 10:52:33 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.929 10:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:36.929 ************************************ 00:06:36.929 END TEST accel_decomp 00:06:36.929 ************************************ 00:06:36.929 10:52:33 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.929 10:52:33 -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:36.929 10:52:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.929 10:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:36.929 ************************************ 00:06:36.929 START TEST accel_decmop_full 00:06:36.929 ************************************ 00:06:36.929 10:52:33 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.929 10:52:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.929 10:52:33 -- accel/accel.sh@17 -- # local accel_module 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.929 10:52:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.929 10:52:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.929 10:52:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.929 10:52:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.929 10:52:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.929 10:52:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.929 10:52:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.929 10:52:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.929 10:52:33 -- accel/accel.sh@41 -- # jq -r . 00:06:36.929 [2024-05-15 10:52:33.321460] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
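The *_full variant only changes the -o option. With -o 0 the trace below records the per-operation size as '111250 bytes' (the whole bib file) instead of the '4096 bytes' seen in the earlier runs, so -o is read here as the operation size, with 0 meaning use the full input. A sketch of the difference:

  # sketch: -o 0 sizes each decompress operation to the full input file (111250 bytes here)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0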
00:06:36.929 [2024-05-15 10:52:33.321520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150952 ] 00:06:36.929 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.929 [2024-05-15 10:52:33.384495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.929 [2024-05-15 10:52:33.454966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val=0x1 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val=decompress 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val=software 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val=32 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 
-- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val=32 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val=1 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val=Yes 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:36.929 10:52:33 -- accel/accel.sh@20 -- # val= 00:06:36.929 10:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # IFS=: 00:06:36.929 10:52:33 -- accel/accel.sh@19 -- # read -r var val 00:06:38.314 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.314 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.314 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.314 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.314 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.314 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.314 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.314 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.314 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.314 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.314 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.314 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.314 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.314 10:52:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.314 10:52:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.314 10:52:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.314 00:06:38.314 real 0m1.308s 00:06:38.314 user 0m1.218s 00:06:38.314 sys 0m0.101s 00:06:38.314 10:52:34 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.314 10:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.314 ************************************ 00:06:38.314 END TEST accel_decmop_full 00:06:38.314 ************************************ 00:06:38.314 10:52:34 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.314 10:52:34 -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:38.314 10:52:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.314 10:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.314 ************************************ 00:06:38.314 START TEST accel_decomp_mcore 00:06:38.314 ************************************ 00:06:38.315 10:52:34 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.315 10:52:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.315 10:52:34 -- accel/accel.sh@17 -- # local accel_module 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.315 10:52:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.315 10:52:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.315 10:52:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.315 10:52:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.315 10:52:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.315 10:52:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.315 10:52:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.315 10:52:34 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.315 10:52:34 -- accel/accel.sh@41 -- # jq -r . 00:06:38.315 [2024-05-15 10:52:34.711898] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
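-m 0xf supplies a core mask: it shows up in the EAL parameters below as -c 0xf and in the four 'Reactor started' messages for cores 0-3. A sketch of the multi-core variant:

  # sketch: decompress workload spread across four reactors (cores 0-3)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf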
00:06:38.315 [2024-05-15 10:52:34.711964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151287 ] 00:06:38.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.315 [2024-05-15 10:52:34.774908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.315 [2024-05-15 10:52:34.843620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.315 [2024-05-15 10:52:34.843860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.315 [2024-05-15 10:52:34.844020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.315 [2024-05-15 10:52:34.844021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val=0xf 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val=decompress 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val=software 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val=32 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val=32 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val=1 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val=Yes 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:38.315 10:52:34 -- accel/accel.sh@20 -- # val= 00:06:38.315 10:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # IFS=: 00:06:38.315 10:52:34 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 
10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:35 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.704 10:52:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.704 10:52:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.704 00:06:39.704 real 0m1.298s 00:06:39.704 user 0m4.438s 00:06:39.704 sys 0m0.105s 00:06:39.704 10:52:35 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.704 10:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:39.704 ************************************ 00:06:39.704 END TEST accel_decomp_mcore 00:06:39.704 ************************************ 00:06:39.704 10:52:36 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.704 10:52:36 -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:39.704 10:52:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.704 10:52:36 -- common/autotest_common.sh@10 -- # set +x 00:06:39.704 ************************************ 00:06:39.704 START TEST accel_decomp_full_mcore 00:06:39.704 ************************************ 00:06:39.704 10:52:36 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.704 10:52:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.704 10:52:36 -- accel/accel.sh@17 -- # local accel_module 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.704 10:52:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.704 10:52:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.704 10:52:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.704 10:52:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.704 10:52:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.704 10:52:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.704 10:52:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.704 10:52:36 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.704 10:52:36 -- accel/accel.sh@41 -- # jq -r . 00:06:39.704 [2024-05-15 10:52:36.091793] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
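Note that in the multi-core runs the reported user time (0m4.438s above, and 0m4.491s for the full-buffer variant below) exceeds the ~1.3 s wall-clock time: CPU time is summed across the four reactor cores, each of which runs for the one-second test window. The accel_decomp_full_mcore test being set up here simply combines the -o 0 and -m 0xf options already seen.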
00:06:39.704 [2024-05-15 10:52:36.091890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151643 ] 00:06:39.704 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.704 [2024-05-15 10:52:36.153474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.704 [2024-05-15 10:52:36.220465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.704 [2024-05-15 10:52:36.220606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.704 [2024-05-15 10:52:36.220666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.704 [2024-05-15 10:52:36.220666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val=0xf 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val=decompress 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val= 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val=software 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.704 10:52:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.704 10:52:36 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:39.704 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.705 10:52:36 -- accel/accel.sh@20 -- # val=32 00:06:39.705 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.705 10:52:36 -- accel/accel.sh@20 -- # val=32 00:06:39.705 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.705 10:52:36 -- accel/accel.sh@20 -- # val=1 00:06:39.705 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.705 10:52:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.705 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.705 10:52:36 -- accel/accel.sh@20 -- # val=Yes 00:06:39.705 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.705 10:52:36 -- accel/accel.sh@20 -- # val= 00:06:39.705 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:39.705 10:52:36 -- accel/accel.sh@20 -- # val= 00:06:39.705 10:52:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # IFS=: 00:06:39.705 10:52:36 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 
10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.092 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.092 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.092 10:52:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.092 10:52:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.092 10:52:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.092 00:06:41.092 real 0m1.310s 00:06:41.092 user 0m4.491s 00:06:41.092 sys 0m0.106s 00:06:41.092 10:52:37 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.092 10:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 ************************************ 00:06:41.092 END TEST accel_decomp_full_mcore 00:06:41.092 ************************************ 00:06:41.092 10:52:37 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.092 10:52:37 -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:41.092 10:52:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.092 10:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 ************************************ 00:06:41.093 START TEST accel_decomp_mthread 00:06:41.093 ************************************ 00:06:41.093 10:52:37 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.093 10:52:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.093 10:52:37 -- accel/accel.sh@17 -- # local accel_module 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.093 10:52:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.093 10:52:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.093 10:52:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.093 10:52:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.093 10:52:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.093 10:52:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.093 10:52:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.093 10:52:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:41.093 10:52:37 -- accel/accel.sh@41 -- # jq -r . 00:06:41.093 [2024-05-15 10:52:37.482014] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
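The mthread variants add -T 2, which appears in the trace below as val=2 and presumably requests two worker threads on the single core selected by the 0x1 mask (an inference from the trace, not from accel_perf's help text). A sketch:

  # sketch: single-core decompress workload with two threads
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2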
00:06:41.093 [2024-05-15 10:52:37.482076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151998 ] 00:06:41.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.093 [2024-05-15 10:52:37.542165] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.093 [2024-05-15 10:52:37.604543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val=0x1 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val=decompress 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val=software 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val=32 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 
-- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val=32 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val=2 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val=Yes 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:41.093 10:52:37 -- accel/accel.sh@20 -- # val= 00:06:41.093 10:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # IFS=: 00:06:41.093 10:52:37 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.480 10:52:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.480 10:52:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.480 00:06:42.480 real 0m1.286s 00:06:42.480 user 0m1.191s 00:06:42.480 sys 0m0.107s 00:06:42.480 10:52:38 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.480 10:52:38 -- common/autotest_common.sh@10 -- # set +x 
00:06:42.480 ************************************ 00:06:42.480 END TEST accel_decomp_mthread 00:06:42.480 ************************************ 00:06:42.480 10:52:38 -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.480 10:52:38 -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:42.480 10:52:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.480 10:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:42.480 ************************************ 00:06:42.480 START TEST accel_decomp_full_mthread 00:06:42.480 ************************************ 00:06:42.480 10:52:38 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.480 10:52:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.480 10:52:38 -- accel/accel.sh@17 -- # local accel_module 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:38 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.480 10:52:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.480 10:52:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.480 10:52:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.480 10:52:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.480 10:52:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.480 10:52:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.480 10:52:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.480 10:52:38 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.480 10:52:38 -- accel/accel.sh@41 -- # jq -r . 00:06:42.480 [2024-05-15 10:52:38.848998] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:42.480 [2024-05-15 10:52:38.849083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152280 ] 00:06:42.480 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.480 [2024-05-15 10:52:38.913117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.480 [2024-05-15 10:52:38.984992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val=0x1 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val=decompress 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val= 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val=software 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val=32 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 
-- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val=32 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.480 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.480 10:52:39 -- accel/accel.sh@20 -- # val=2 00:06:42.480 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.481 10:52:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.481 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.481 10:52:39 -- accel/accel.sh@20 -- # val=Yes 00:06:42.481 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.481 10:52:39 -- accel/accel.sh@20 -- # val= 00:06:42.481 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:42.481 10:52:39 -- accel/accel.sh@20 -- # val= 00:06:42.481 10:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # IFS=: 00:06:42.481 10:52:39 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:52:40 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:52:40 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:52:40 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:52:40 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:52:40 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:52:40 -- accel/accel.sh@20 -- # val= 00:06:43.868 10:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # IFS=: 00:06:43.868 10:52:40 -- accel/accel.sh@19 -- # read -r var val 00:06:43.868 10:52:40 -- accel/accel.sh@20 -- # val= 00:06:43.869 10:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.869 10:52:40 -- accel/accel.sh@19 -- # IFS=: 00:06:43.869 10:52:40 -- accel/accel.sh@19 -- # read -r var val 00:06:43.869 10:52:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.869 10:52:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.869 10:52:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.869 00:06:43.869 real 0m1.328s 00:06:43.869 user 0m1.231s 00:06:43.869 sys 0m0.108s 00:06:43.869 10:52:40 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.869 10:52:40 -- common/autotest_common.sh@10 -- # set +x 
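The two decompress runs above drive the software accel module from accel_perf: a one-second test (-t 1) over test/accel/bib with two worker threads (-T 2), first with the default 4096-byte transfer size and then, in the full_mthread case, with -o 0 so the whole 111250-byte input is processed per operation. A minimal standalone sketch of the same invocations, assuming the workspace path used in this run and omitting the JSON accel config the harness feeds in via -c /dev/fd/62:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# two-thread software decompress, 1 second, default 4 KiB transfers (flags carried over from the run above)
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2
# same test against the full input buffer instead of 4 KiB chunks
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2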
00:06:43.869 ************************************ 00:06:43.869 END TEST accel_decomp_full_mthread 00:06:43.869 ************************************ 00:06:43.869 10:52:40 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:43.869 10:52:40 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:43.869 10:52:40 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:43.869 10:52:40 -- accel/accel.sh@137 -- # build_accel_config 00:06:43.869 10:52:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.869 10:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:43.869 10:52:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.869 10:52:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.869 10:52:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.869 10:52:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.869 10:52:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.869 10:52:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.869 10:52:40 -- accel/accel.sh@41 -- # jq -r . 00:06:43.869 ************************************ 00:06:43.869 START TEST accel_dif_functional_tests 00:06:43.869 ************************************ 00:06:43.869 10:52:40 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:43.869 [2024-05-15 10:52:40.278142] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:06:43.869 [2024-05-15 10:52:40.278188] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152522 ] 00:06:43.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.869 [2024-05-15 10:52:40.337039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.869 [2024-05-15 10:52:40.403247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.869 [2024-05-15 10:52:40.403361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.869 [2024-05-15 10:52:40.403364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.869 00:06:43.869 00:06:43.869 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.869 http://cunit.sourceforge.net/ 00:06:43.869 00:06:43.869 00:06:43.869 Suite: accel_dif 00:06:43.869 Test: verify: DIF generated, GUARD check ...passed 00:06:43.869 Test: verify: DIF generated, APPTAG check ...passed 00:06:43.869 Test: verify: DIF generated, REFTAG check ...passed 00:06:43.869 Test: verify: DIF not generated, GUARD check ...[2024-05-15 10:52:40.458626] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:43.869 passed 00:06:43.869 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 10:52:40.458671] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:43.869 passed 00:06:43.869 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 10:52:40.458692] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:43.869 passed 00:06:43.869 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:43.869 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 10:52:40.458740] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:43.869 passed 00:06:43.869 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:06:43.869 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:43.869 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:43.869 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 10:52:40.458849] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:43.869 passed 00:06:43.869 Test: verify copy: DIF generated, GUARD check ...passed 00:06:43.869 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:43.869 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:43.869 Test: verify copy: DIF not generated, GUARD check ...[2024-05-15 10:52:40.458968] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:43.869 passed 00:06:43.869 Test: verify copy: DIF not generated, APPTAG check ...[2024-05-15 10:52:40.458990] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:43.869 passed 00:06:43.869 Test: verify copy: DIF not generated, REFTAG check ...[2024-05-15 10:52:40.459011] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:43.869 passed 00:06:43.869 Test: generate copy: DIF generated, GUARD check ...passed 00:06:43.869 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:43.869 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:43.869 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:43.869 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:43.869 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:43.869 Test: generate copy: iovecs-len validate ...[2024-05-15 10:52:40.459199] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:43.869 passed 00:06:43.869 Test: generate copy: buffer alignment validate ...passed 00:06:43.869 00:06:43.869 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.869 suites 1 1 n/a 0 0 00:06:43.869 tests 26 26 26 0 0 00:06:43.869 asserts 115 115 115 0 n/a 00:06:43.869 00:06:43.869 Elapsed time = 0.002 seconds 00:06:44.130 00:06:44.130 real 0m0.343s 00:06:44.130 user 0m0.440s 00:06:44.130 sys 0m0.123s 00:06:44.130 10:52:40 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.130 10:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:44.130 ************************************ 00:06:44.130 END TEST accel_dif_functional_tests 00:06:44.130 ************************************ 00:06:44.130 00:06:44.130 real 0m30.145s 00:06:44.130 user 0m33.705s 00:06:44.130 sys 0m4.069s 00:06:44.130 10:52:40 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.130 10:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:44.130 ************************************ 00:06:44.130 END TEST accel 00:06:44.130 ************************************ 00:06:44.130 10:52:40 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:44.130 10:52:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.130 10:52:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.130 10:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:44.130 ************************************ 00:06:44.130 START TEST accel_rpc 00:06:44.130 ************************************ 00:06:44.130 10:52:40 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:44.393 * Looking for test storage... 00:06:44.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:44.393 10:52:40 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:44.393 10:52:40 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=152765 00:06:44.393 10:52:40 -- accel/accel_rpc.sh@15 -- # waitforlisten 152765 00:06:44.393 10:52:40 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:44.393 10:52:40 -- common/autotest_common.sh@827 -- # '[' -z 152765 ']' 00:06:44.393 10:52:40 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.393 10:52:40 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.393 10:52:40 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.393 10:52:40 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.393 10:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:44.393 [2024-05-15 10:52:40.863750] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:44.393 [2024-05-15 10:52:40.863823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152765 ] 00:06:44.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.393 [2024-05-15 10:52:40.928069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.393 [2024-05-15 10:52:41.002671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.966 10:52:41 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.966 10:52:41 -- common/autotest_common.sh@860 -- # return 0 00:06:44.966 10:52:41 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:44.966 10:52:41 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:44.966 10:52:41 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:44.966 10:52:41 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:44.966 10:52:41 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:44.966 10:52:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.966 10:52:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.966 10:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.228 ************************************ 00:06:45.228 START TEST accel_assign_opcode 00:06:45.228 ************************************ 00:06:45.228 10:52:41 -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:45.228 10:52:41 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:45.228 10:52:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.228 10:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.228 [2024-05-15 10:52:41.660647] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:45.228 10:52:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.228 10:52:41 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:45.228 10:52:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.228 10:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.228 [2024-05-15 10:52:41.668659] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:45.228 10:52:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.228 10:52:41 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:45.228 10:52:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.228 10:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.228 10:52:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.228 10:52:41 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:45.229 10:52:41 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:45.229 10:52:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.229 10:52:41 -- accel/accel_rpc.sh@42 -- # grep software 00:06:45.229 10:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.229 10:52:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.229 software 00:06:45.229 00:06:45.229 real 0m0.208s 00:06:45.229 user 0m0.050s 00:06:45.229 sys 0m0.009s 00:06:45.229 10:52:41 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.229 10:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.229 ************************************ 00:06:45.229 END TEST accel_assign_opcode 00:06:45.229 ************************************ 
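The assign-opcode test relies on spdk_tgt being started with --wait-for-rpc, so the copy opcode can be re-routed (first to a placeholder module, then to software) before the accel framework initializes; accel_get_opc_assignments is then queried to confirm the mapping. A sketch of the same sequence with rpc.py, assuming the workspace path from this run and a target that has come up on /var/tmp/spdk.sock:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt --wait-for-rpc &      # start paused so opcode assignments can still be changed
# (the harness waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs)
$SPDK/scripts/rpc.py accel_assign_opc -o copy -m software
$SPDK/scripts/rpc.py framework_start_init
$SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected output: software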
00:06:45.490 10:52:41 -- accel/accel_rpc.sh@55 -- # killprocess 152765 00:06:45.490 10:52:41 -- common/autotest_common.sh@946 -- # '[' -z 152765 ']' 00:06:45.490 10:52:41 -- common/autotest_common.sh@950 -- # kill -0 152765 00:06:45.490 10:52:41 -- common/autotest_common.sh@951 -- # uname 00:06:45.490 10:52:41 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.490 10:52:41 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 152765 00:06:45.490 10:52:41 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.490 10:52:41 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.490 10:52:41 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 152765' 00:06:45.490 killing process with pid 152765 00:06:45.490 10:52:41 -- common/autotest_common.sh@965 -- # kill 152765 00:06:45.490 10:52:41 -- common/autotest_common.sh@970 -- # wait 152765 00:06:45.783 00:06:45.783 real 0m1.467s 00:06:45.783 user 0m1.557s 00:06:45.783 sys 0m0.398s 00:06:45.783 10:52:42 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.783 10:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:45.783 ************************************ 00:06:45.783 END TEST accel_rpc 00:06:45.783 ************************************ 00:06:45.783 10:52:42 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.783 10:52:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.783 10:52:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.783 10:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:45.783 ************************************ 00:06:45.783 START TEST app_cmdline 00:06:45.783 ************************************ 00:06:45.783 10:52:42 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.783 * Looking for test storage... 00:06:45.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.783 10:52:42 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.783 10:52:42 -- app/cmdline.sh@17 -- # spdk_tgt_pid=153179 00:06:45.783 10:52:42 -- app/cmdline.sh@18 -- # waitforlisten 153179 00:06:45.783 10:52:42 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.783 10:52:42 -- common/autotest_common.sh@827 -- # '[' -z 153179 ']' 00:06:45.783 10:52:42 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.783 10:52:42 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.783 10:52:42 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.783 10:52:42 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.783 10:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:45.783 [2024-05-15 10:52:42.391595] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:06:45.783 [2024-05-15 10:52:42.391650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153179 ] 00:06:45.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.044 [2024-05-15 10:52:42.451716] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.044 [2024-05-15 10:52:42.515966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.619 10:52:43 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.619 10:52:43 -- common/autotest_common.sh@860 -- # return 0 00:06:46.619 10:52:43 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:46.879 { 00:06:46.879 "version": "SPDK v24.05-pre git sha1 7d4b19830", 00:06:46.879 "fields": { 00:06:46.879 "major": 24, 00:06:46.879 "minor": 5, 00:06:46.879 "patch": 0, 00:06:46.879 "suffix": "-pre", 00:06:46.879 "commit": "7d4b19830" 00:06:46.879 } 00:06:46.879 } 00:06:46.879 10:52:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:46.879 10:52:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:46.879 10:52:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:46.879 10:52:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.879 10:52:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.879 10:52:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.879 10:52:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.879 10:52:43 -- app/cmdline.sh@26 -- # sort 00:06:46.879 10:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 10:52:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.879 10:52:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.879 10:52:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.879 10:52:43 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.879 10:52:43 -- common/autotest_common.sh@648 -- # local es=0 00:06:46.879 10:52:43 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.879 10:52:43 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.879 10:52:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.879 10:52:43 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.879 10:52:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.879 10:52:43 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.879 10:52:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.879 10:52:43 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.879 10:52:43 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:46.879 10:52:43 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.879 request: 00:06:46.879 { 00:06:46.879 
"method": "env_dpdk_get_mem_stats", 00:06:46.879 "req_id": 1 00:06:46.879 } 00:06:46.879 Got JSON-RPC error response 00:06:46.879 response: 00:06:46.879 { 00:06:46.879 "code": -32601, 00:06:46.879 "message": "Method not found" 00:06:46.879 } 00:06:47.140 10:52:43 -- common/autotest_common.sh@651 -- # es=1 00:06:47.140 10:52:43 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.141 10:52:43 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.141 10:52:43 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.141 10:52:43 -- app/cmdline.sh@1 -- # killprocess 153179 00:06:47.141 10:52:43 -- common/autotest_common.sh@946 -- # '[' -z 153179 ']' 00:06:47.141 10:52:43 -- common/autotest_common.sh@950 -- # kill -0 153179 00:06:47.141 10:52:43 -- common/autotest_common.sh@951 -- # uname 00:06:47.141 10:52:43 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:47.141 10:52:43 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 153179 00:06:47.141 10:52:43 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:47.141 10:52:43 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:47.141 10:52:43 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 153179' 00:06:47.141 killing process with pid 153179 00:06:47.141 10:52:43 -- common/autotest_common.sh@965 -- # kill 153179 00:06:47.141 10:52:43 -- common/autotest_common.sh@970 -- # wait 153179 00:06:47.401 00:06:47.401 real 0m1.566s 00:06:47.401 user 0m1.908s 00:06:47.401 sys 0m0.385s 00:06:47.401 10:52:43 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.401 10:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:47.401 ************************************ 00:06:47.401 END TEST app_cmdline 00:06:47.401 ************************************ 00:06:47.401 10:52:43 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.401 10:52:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.401 10:52:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.401 10:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:47.401 ************************************ 00:06:47.401 START TEST version 00:06:47.401 ************************************ 00:06:47.401 10:52:43 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.401 * Looking for test storage... 
00:06:47.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.401 10:52:43 -- app/version.sh@17 -- # get_header_version major 00:06:47.401 10:52:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.401 10:52:43 -- app/version.sh@14 -- # cut -f2 00:06:47.401 10:52:43 -- app/version.sh@14 -- # tr -d '"' 00:06:47.401 10:52:43 -- app/version.sh@17 -- # major=24 00:06:47.401 10:52:43 -- app/version.sh@18 -- # get_header_version minor 00:06:47.401 10:52:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.401 10:52:43 -- app/version.sh@14 -- # cut -f2 00:06:47.401 10:52:43 -- app/version.sh@14 -- # tr -d '"' 00:06:47.401 10:52:44 -- app/version.sh@18 -- # minor=5 00:06:47.401 10:52:44 -- app/version.sh@19 -- # get_header_version patch 00:06:47.401 10:52:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.401 10:52:44 -- app/version.sh@14 -- # cut -f2 00:06:47.401 10:52:44 -- app/version.sh@14 -- # tr -d '"' 00:06:47.401 10:52:44 -- app/version.sh@19 -- # patch=0 00:06:47.401 10:52:44 -- app/version.sh@20 -- # get_header_version suffix 00:06:47.401 10:52:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.401 10:52:44 -- app/version.sh@14 -- # cut -f2 00:06:47.401 10:52:44 -- app/version.sh@14 -- # tr -d '"' 00:06:47.401 10:52:44 -- app/version.sh@20 -- # suffix=-pre 00:06:47.401 10:52:44 -- app/version.sh@22 -- # version=24.5 00:06:47.401 10:52:44 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:47.401 10:52:44 -- app/version.sh@28 -- # version=24.5rc0 00:06:47.401 10:52:44 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:47.401 10:52:44 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:47.662 10:52:44 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:47.662 10:52:44 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:47.662 00:06:47.662 real 0m0.176s 00:06:47.662 user 0m0.092s 00:06:47.662 sys 0m0.126s 00:06:47.662 10:52:44 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.662 10:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.662 ************************************ 00:06:47.662 END TEST version 00:06:47.662 ************************************ 00:06:47.662 10:52:44 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:47.662 10:52:44 -- spdk/autotest.sh@194 -- # uname -s 00:06:47.662 10:52:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:47.662 10:52:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:47.662 10:52:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:47.662 10:52:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:47.662 10:52:44 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:47.662 10:52:44 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:47.662 10:52:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.662 10:52:44 -- 
common/autotest_common.sh@10 -- # set +x 00:06:47.662 10:52:44 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:47.662 10:52:44 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:47.662 10:52:44 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:47.662 10:52:44 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:47.662 10:52:44 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:47.662 10:52:44 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:47.662 10:52:44 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.662 10:52:44 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:47.662 10:52:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.662 10:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.662 ************************************ 00:06:47.662 START TEST nvmf_tcp 00:06:47.662 ************************************ 00:06:47.662 10:52:44 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.662 * Looking for test storage... 00:06:47.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.662 10:52:44 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:47.662 10:52:44 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:47.662 10:52:44 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.662 10:52:44 -- nvmf/common.sh@7 -- # uname -s 00:06:47.662 10:52:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.662 10:52:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.662 10:52:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.662 10:52:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.662 10:52:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.662 10:52:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.662 10:52:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.662 10:52:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.662 10:52:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.662 10:52:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.924 10:52:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:47.924 10:52:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:47.924 10:52:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.924 10:52:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.924 10:52:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.924 10:52:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.924 10:52:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.924 10:52:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.924 10:52:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.924 10:52:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.924 10:52:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.924 10:52:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.924 10:52:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.924 10:52:44 -- paths/export.sh@5 -- # export PATH 00:06:47.924 10:52:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.924 10:52:44 -- nvmf/common.sh@47 -- # : 0 00:06:47.924 10:52:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.924 10:52:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.924 10:52:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.924 10:52:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.924 10:52:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.924 10:52:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.924 10:52:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.924 10:52:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.924 10:52:44 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:47.924 10:52:44 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:47.924 10:52:44 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:47.924 10:52:44 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:47.924 10:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.924 10:52:44 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:47.924 10:52:44 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:47.924 10:52:44 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:47.924 10:52:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.924 10:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.924 ************************************ 00:06:47.924 START TEST nvmf_example 00:06:47.924 ************************************ 00:06:47.924 10:52:44 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:47.924 * Looking for test storage... 
00:06:47.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.924 10:52:44 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.924 10:52:44 -- nvmf/common.sh@7 -- # uname -s 00:06:47.924 10:52:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.924 10:52:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.924 10:52:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.924 10:52:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.924 10:52:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.924 10:52:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.924 10:52:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.924 10:52:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.924 10:52:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.924 10:52:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.924 10:52:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:47.924 10:52:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:47.924 10:52:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.924 10:52:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.924 10:52:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.924 10:52:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.924 10:52:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.924 10:52:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.924 10:52:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.924 10:52:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.924 10:52:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.924 10:52:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.924 10:52:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.924 10:52:44 -- paths/export.sh@5 -- # export PATH 00:06:47.924 10:52:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.924 10:52:44 -- nvmf/common.sh@47 -- # : 0 00:06:47.924 10:52:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.924 10:52:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.924 10:52:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.924 10:52:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.924 10:52:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.924 10:52:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.924 10:52:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.924 10:52:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.924 10:52:44 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:47.924 10:52:44 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:47.924 10:52:44 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:47.924 10:52:44 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:47.924 10:52:44 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:47.924 10:52:44 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:47.924 10:52:44 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:47.924 10:52:44 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:47.924 10:52:44 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:47.924 10:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.924 10:52:44 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:47.924 10:52:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:47.924 10:52:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.924 10:52:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:47.924 10:52:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:47.924 10:52:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:47.924 10:52:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.924 10:52:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.924 10:52:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.924 10:52:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:47.924 10:52:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:47.924 10:52:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:47.924 10:52:44 -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.518 10:52:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:54.518 10:52:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.518 10:52:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:54.518 10:52:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.518 10:52:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.518 10:52:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.518 10:52:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.518 10:52:50 -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.518 10:52:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.518 10:52:50 -- nvmf/common.sh@296 -- # e810=() 00:06:54.518 10:52:50 -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.518 10:52:50 -- nvmf/common.sh@297 -- # x722=() 00:06:54.518 10:52:50 -- nvmf/common.sh@297 -- # local -ga x722 00:06:54.518 10:52:50 -- nvmf/common.sh@298 -- # mlx=() 00:06:54.518 10:52:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:54.518 10:52:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.518 10:52:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.518 10:52:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.518 10:52:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:54.518 10:52:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.518 10:52:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.518 10:52:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.518 10:52:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.518 10:52:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:54.518 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:54.518 10:52:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.518 10:52:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.518 10:52:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.518 10:52:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.518 10:52:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.518 10:52:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.518 10:52:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:54.518 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:54.519 10:52:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.519 10:52:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.519 10:52:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.519 10:52:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
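Having matched both ports of the E810 card (device ID 0x159b) against the Intel vendor ID, the script next resolves each PCI function to the netdev the ice driver created for it by globbing /sys/bus/pci/devices/$pci/net/. A quick manual equivalent, assuming the same PCI addresses as this machine:

ls /sys/bus/pci/devices/0000:4b:00.0/net    # netdev behind the first E810 port
ls /sys/bus/pci/devices/0000:4b:00.1/net    # netdev behind the second E810 port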
00:06:54.519 10:52:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.519 10:52:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.519 10:52:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.519 10:52:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.519 10:52:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.519 10:52:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.519 10:52:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:54.519 10:52:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.519 10:52:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:54.519 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:54.519 10:52:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.519 10:52:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.519 10:52:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.519 10:52:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:54.519 10:52:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.519 10:52:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:54.519 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:54.519 10:52:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.519 10:52:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:54.519 10:52:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:54.519 10:52:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:54.519 10:52:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:54.519 10:52:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:54.519 10:52:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.519 10:52:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.519 10:52:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.519 10:52:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.519 10:52:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.519 10:52:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.519 10:52:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.519 10:52:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.519 10:52:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.519 10:52:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.519 10:52:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.519 10:52:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.519 10:52:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.519 10:52:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.519 10:52:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.519 10:52:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.519 10:52:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.781 10:52:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.781 10:52:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.781 10:52:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:54.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:06:54.781 00:06:54.781 --- 10.0.0.2 ping statistics --- 00:06:54.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.781 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:06:54.781 10:52:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:06:54.781 00:06:54.781 --- 10.0.0.1 ping statistics --- 00:06:54.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.781 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:06:54.781 10:52:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.781 10:52:51 -- nvmf/common.sh@411 -- # return 0 00:06:54.781 10:52:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:54.781 10:52:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.781 10:52:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:54.781 10:52:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:54.781 10:52:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.781 10:52:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:54.781 10:52:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:54.781 10:52:51 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:54.781 10:52:51 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:54.781 10:52:51 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:54.781 10:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:54.781 10:52:51 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:54.781 10:52:51 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:54.781 10:52:51 -- target/nvmf_example.sh@34 -- # nvmfpid=157277 00:06:54.781 10:52:51 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:54.781 10:52:51 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:54.781 10:52:51 -- target/nvmf_example.sh@36 -- # waitforlisten 157277 00:06:54.781 10:52:51 -- common/autotest_common.sh@827 -- # '[' -z 157277 ']' 00:06:54.781 10:52:51 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.781 10:52:51 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.781 10:52:51 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
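Editor's note: the nvmf_tcp_init trace above builds the single-host test topology by moving the first E810 port (cvl_0_0) into a private network namespace as the target at 10.0.0.2, leaving its peer port (cvl_0_1) in the default namespace as the initiator at 10.0.0.1, and opening TCP port 4420 with an iptables rule before the reachability pings. Below is a minimal standalone sketch of that same setup, assuming the interface names and addresses seen in this trace; they will differ on other machines.

  # sketch only: recreate the two-namespace NVMe/TCP test topology by hand
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator side, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side, inside the namespace
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                           # initiator -> target reachability check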
00:06:54.781 10:52:51 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.781 10:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:55.042 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.612 10:52:52 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.612 10:52:52 -- common/autotest_common.sh@860 -- # return 0 00:06:55.612 10:52:52 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:55.612 10:52:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.612 10:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.612 10:52:52 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.612 10:52:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.612 10:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.612 10:52:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.613 10:52:52 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:55.613 10:52:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.613 10:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.873 10:52:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.873 10:52:52 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:55.873 10:52:52 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:55.873 10:52:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.873 10:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.873 10:52:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.873 10:52:52 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:55.873 10:52:52 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:55.873 10:52:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.873 10:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.873 10:52:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.873 10:52:52 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.873 10:52:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.873 10:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.873 10:52:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.873 10:52:52 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:55.873 10:52:52 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:55.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.113 Initializing NVMe Controllers 00:07:08.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:08.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:08.113 Initialization complete. Launching workers. 
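Editor's note: the rpc_cmd calls above configure the example target over /var/tmp/spdk.sock: a TCP transport with 8192-byte in-capsule data, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev attached as a namespace, and a listener on 10.0.0.2:4420; spdk_nvme_perf then drives a 4 KiB random read/write mix (-M 30) at queue depth 64 for 10 seconds. A hedged sketch of the same sequence, issued with SPDK's scripts/rpc.py from an SPDK checkout against an already running nvmf target (here the examples/nvmf app launched above):

  # sketch only: the same target configuration mirrored with scripts/rpc.py
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512                   # 64 MiB bdev, 512 B blocks -> "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf results for this run follow in the trace below.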
00:07:08.113 ======================================================== 00:07:08.113 Latency(us) 00:07:08.113 Device Information : IOPS MiB/s Average min max 00:07:08.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18535.20 72.40 3453.79 622.51 15978.25 00:07:08.113 ======================================================== 00:07:08.113 Total : 18535.20 72.40 3453.79 622.51 15978.25 00:07:08.113 00:07:08.113 10:53:02 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:08.113 10:53:02 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:08.113 10:53:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:08.113 10:53:02 -- nvmf/common.sh@117 -- # sync 00:07:08.113 10:53:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.113 10:53:02 -- nvmf/common.sh@120 -- # set +e 00:07:08.113 10:53:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.113 10:53:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.113 rmmod nvme_tcp 00:07:08.113 rmmod nvme_fabrics 00:07:08.113 rmmod nvme_keyring 00:07:08.113 10:53:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.113 10:53:02 -- nvmf/common.sh@124 -- # set -e 00:07:08.113 10:53:02 -- nvmf/common.sh@125 -- # return 0 00:07:08.113 10:53:02 -- nvmf/common.sh@478 -- # '[' -n 157277 ']' 00:07:08.113 10:53:02 -- nvmf/common.sh@479 -- # killprocess 157277 00:07:08.113 10:53:02 -- common/autotest_common.sh@946 -- # '[' -z 157277 ']' 00:07:08.113 10:53:02 -- common/autotest_common.sh@950 -- # kill -0 157277 00:07:08.113 10:53:02 -- common/autotest_common.sh@951 -- # uname 00:07:08.113 10:53:02 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:08.113 10:53:02 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 157277 00:07:08.113 10:53:02 -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:08.113 10:53:02 -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:08.113 10:53:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 157277' 00:07:08.113 killing process with pid 157277 00:07:08.113 10:53:02 -- common/autotest_common.sh@965 -- # kill 157277 00:07:08.113 10:53:02 -- common/autotest_common.sh@970 -- # wait 157277 00:07:08.113 nvmf threads initialize successfully 00:07:08.113 bdev subsystem init successfully 00:07:08.113 created a nvmf target service 00:07:08.113 create targets's poll groups done 00:07:08.113 all subsystems of target started 00:07:08.113 nvmf target is running 00:07:08.113 all subsystems of target stopped 00:07:08.113 destroy targets's poll groups done 00:07:08.113 destroyed the nvmf target service 00:07:08.113 bdev subsystem finish successfully 00:07:08.113 nvmf threads destroy successfully 00:07:08.113 10:53:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:08.113 10:53:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:08.113 10:53:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:08.113 10:53:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.113 10:53:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.113 10:53:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.113 10:53:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.113 10:53:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.377 10:53:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:08.377 10:53:04 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:08.377 10:53:04 -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:07:08.377 10:53:04 -- common/autotest_common.sh@10 -- # set +x 00:07:08.377 00:07:08.377 real 0m20.590s 00:07:08.377 user 0m46.602s 00:07:08.377 sys 0m6.122s 00:07:08.377 10:53:04 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.377 10:53:04 -- common/autotest_common.sh@10 -- # set +x 00:07:08.377 ************************************ 00:07:08.377 END TEST nvmf_example 00:07:08.377 ************************************ 00:07:08.377 10:53:05 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:08.377 10:53:05 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:08.377 10:53:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.377 10:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:08.642 ************************************ 00:07:08.642 START TEST nvmf_filesystem 00:07:08.642 ************************************ 00:07:08.642 10:53:05 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:08.642 * Looking for test storage... 00:07:08.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.642 10:53:05 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:08.642 10:53:05 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:08.642 10:53:05 -- common/autotest_common.sh@34 -- # set -e 00:07:08.642 10:53:05 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:08.642 10:53:05 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:08.642 10:53:05 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:08.642 10:53:05 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:08.642 10:53:05 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:08.642 10:53:05 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:08.642 10:53:05 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:08.642 10:53:05 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:08.642 10:53:05 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:08.642 10:53:05 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:08.642 10:53:05 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:08.642 10:53:05 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:08.642 10:53:05 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:08.642 10:53:05 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:08.642 10:53:05 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:08.642 10:53:05 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:08.642 10:53:05 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:08.642 10:53:05 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:08.642 10:53:05 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:08.642 10:53:05 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:08.642 10:53:05 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:08.642 10:53:05 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:08.642 10:53:05 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:08.642 10:53:05 -- common/build_config.sh@19 -- # 
CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:08.642 10:53:05 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:08.642 10:53:05 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:08.642 10:53:05 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:08.642 10:53:05 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:08.642 10:53:05 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:08.642 10:53:05 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:08.642 10:53:05 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:08.642 10:53:05 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:08.642 10:53:05 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:08.642 10:53:05 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:08.642 10:53:05 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:08.642 10:53:05 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:08.642 10:53:05 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:08.642 10:53:05 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:08.642 10:53:05 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:08.642 10:53:05 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:08.642 10:53:05 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:08.642 10:53:05 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:08.642 10:53:05 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:08.642 10:53:05 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:08.642 10:53:05 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:08.642 10:53:05 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:08.642 10:53:05 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:08.642 10:53:05 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:08.642 10:53:05 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:08.642 10:53:05 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:08.642 10:53:05 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:08.642 10:53:05 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:08.642 10:53:05 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:08.642 10:53:05 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:08.642 10:53:05 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:08.642 10:53:05 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:08.642 10:53:05 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:08.642 10:53:05 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:08.642 10:53:05 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:08.642 10:53:05 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:08.642 10:53:05 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:08.642 10:53:05 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:08.642 10:53:05 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:08.642 10:53:05 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:08.642 10:53:05 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:08.642 10:53:05 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:08.642 10:53:05 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:08.642 10:53:05 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:08.642 10:53:05 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:08.642 10:53:05 -- common/build_config.sh@65 
-- # CONFIG_SHARED=y 00:07:08.642 10:53:05 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:08.642 10:53:05 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:08.642 10:53:05 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:08.642 10:53:05 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:08.642 10:53:05 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:08.642 10:53:05 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:08.642 10:53:05 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:08.642 10:53:05 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:08.642 10:53:05 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:08.642 10:53:05 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:08.642 10:53:05 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:08.642 10:53:05 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:08.642 10:53:05 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:08.642 10:53:05 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:08.642 10:53:05 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:08.642 10:53:05 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:08.642 10:53:05 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:08.642 10:53:05 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:08.642 10:53:05 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:08.642 10:53:05 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:08.642 10:53:05 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:08.642 10:53:05 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:08.642 10:53:05 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:08.643 10:53:05 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.643 10:53:05 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:08.643 10:53:05 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:08.643 10:53:05 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:08.643 10:53:05 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:08.643 10:53:05 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:08.643 10:53:05 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:08.643 10:53:05 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:08.643 10:53:05 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:08.643 10:53:05 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:08.643 #define SPDK_CONFIG_H 00:07:08.643 #define SPDK_CONFIG_APPS 1 00:07:08.643 #define SPDK_CONFIG_ARCH native 00:07:08.643 #undef SPDK_CONFIG_ASAN 00:07:08.643 #undef SPDK_CONFIG_AVAHI 00:07:08.643 #undef SPDK_CONFIG_CET 00:07:08.643 #define SPDK_CONFIG_COVERAGE 1 00:07:08.643 #define SPDK_CONFIG_CROSS_PREFIX 00:07:08.643 #undef SPDK_CONFIG_CRYPTO 00:07:08.643 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:08.643 #undef SPDK_CONFIG_CUSTOMOCF 00:07:08.643 
#undef SPDK_CONFIG_DAOS 00:07:08.643 #define SPDK_CONFIG_DAOS_DIR 00:07:08.643 #define SPDK_CONFIG_DEBUG 1 00:07:08.643 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:08.643 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:08.643 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:08.643 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:08.643 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:08.643 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:08.643 #define SPDK_CONFIG_EXAMPLES 1 00:07:08.643 #undef SPDK_CONFIG_FC 00:07:08.643 #define SPDK_CONFIG_FC_PATH 00:07:08.643 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:08.643 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:08.643 #undef SPDK_CONFIG_FUSE 00:07:08.643 #undef SPDK_CONFIG_FUZZER 00:07:08.643 #define SPDK_CONFIG_FUZZER_LIB 00:07:08.643 #undef SPDK_CONFIG_GOLANG 00:07:08.643 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:08.643 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:08.643 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:08.643 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:08.643 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:08.643 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:08.643 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:08.643 #define SPDK_CONFIG_IDXD 1 00:07:08.643 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:08.643 #undef SPDK_CONFIG_IPSEC_MB 00:07:08.643 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:08.643 #define SPDK_CONFIG_ISAL 1 00:07:08.643 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:08.643 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:08.643 #define SPDK_CONFIG_LIBDIR 00:07:08.643 #undef SPDK_CONFIG_LTO 00:07:08.643 #define SPDK_CONFIG_MAX_LCORES 00:07:08.643 #define SPDK_CONFIG_NVME_CUSE 1 00:07:08.643 #undef SPDK_CONFIG_OCF 00:07:08.643 #define SPDK_CONFIG_OCF_PATH 00:07:08.643 #define SPDK_CONFIG_OPENSSL_PATH 00:07:08.643 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:08.643 #define SPDK_CONFIG_PGO_DIR 00:07:08.643 #undef SPDK_CONFIG_PGO_USE 00:07:08.643 #define SPDK_CONFIG_PREFIX /usr/local 00:07:08.643 #undef SPDK_CONFIG_RAID5F 00:07:08.643 #undef SPDK_CONFIG_RBD 00:07:08.643 #define SPDK_CONFIG_RDMA 1 00:07:08.643 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:08.643 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:08.643 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:08.643 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:08.643 #define SPDK_CONFIG_SHARED 1 00:07:08.643 #undef SPDK_CONFIG_SMA 00:07:08.643 #define SPDK_CONFIG_TESTS 1 00:07:08.643 #undef SPDK_CONFIG_TSAN 00:07:08.643 #define SPDK_CONFIG_UBLK 1 00:07:08.643 #define SPDK_CONFIG_UBSAN 1 00:07:08.643 #undef SPDK_CONFIG_UNIT_TESTS 00:07:08.643 #undef SPDK_CONFIG_URING 00:07:08.643 #define SPDK_CONFIG_URING_PATH 00:07:08.643 #undef SPDK_CONFIG_URING_ZNS 00:07:08.643 #undef SPDK_CONFIG_USDT 00:07:08.643 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:08.643 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:08.643 #define SPDK_CONFIG_VFIO_USER 1 00:07:08.643 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:08.643 #define SPDK_CONFIG_VHOST 1 00:07:08.643 #define SPDK_CONFIG_VIRTIO 1 00:07:08.643 #undef SPDK_CONFIG_VTUNE 00:07:08.643 #define SPDK_CONFIG_VTUNE_DIR 00:07:08.643 #define SPDK_CONFIG_WERROR 1 00:07:08.643 #define SPDK_CONFIG_WPDK_DIR 00:07:08.643 #undef SPDK_CONFIG_XNVME 00:07:08.643 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:08.643 10:53:05 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:08.643 10:53:05 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.643 10:53:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.643 10:53:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.643 10:53:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.643 10:53:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.643 10:53:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.643 10:53:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.643 10:53:05 -- paths/export.sh@5 -- # export PATH 00:07:08.643 10:53:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.643 10:53:05 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:08.643 10:53:05 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:08.643 10:53:05 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:08.643 10:53:05 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:08.643 10:53:05 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:08.643 10:53:05 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:08.643 10:53:05 -- pm/common@64 -- # TEST_TAG=N/A 00:07:08.643 10:53:05 -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:08.643 10:53:05 -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:08.643 10:53:05 -- pm/common@68 -- # uname -s 00:07:08.643 10:53:05 -- pm/common@68 -- # PM_OS=Linux 00:07:08.643 10:53:05 -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:08.643 10:53:05 -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:08.643 10:53:05 -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:08.643 10:53:05 -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:08.643 10:53:05 -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:08.643 10:53:05 -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:08.643 10:53:05 -- pm/common@76 -- # SUDO[0]= 00:07:08.643 10:53:05 -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:08.643 10:53:05 -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:08.643 10:53:05 -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:08.643 10:53:05 -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:08.643 10:53:05 -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:08.643 10:53:05 -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:08.643 10:53:05 -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:08.643 10:53:05 -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:08.643 10:53:05 -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:08.643 10:53:05 -- common/autotest_common.sh@57 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:08.643 10:53:05 -- common/autotest_common.sh@61 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:08.643 10:53:05 -- common/autotest_common.sh@63 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:08.643 10:53:05 -- common/autotest_common.sh@65 -- # : 1 00:07:08.643 10:53:05 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:08.643 10:53:05 -- common/autotest_common.sh@67 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:08.643 10:53:05 -- common/autotest_common.sh@69 -- # : 00:07:08.643 10:53:05 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:08.643 10:53:05 -- common/autotest_common.sh@71 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:08.643 10:53:05 -- common/autotest_common.sh@73 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:08.643 10:53:05 -- common/autotest_common.sh@75 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:08.643 10:53:05 -- common/autotest_common.sh@77 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:08.643 10:53:05 -- common/autotest_common.sh@79 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:08.643 10:53:05 -- common/autotest_common.sh@81 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:08.643 10:53:05 -- common/autotest_common.sh@83 -- # : 0 00:07:08.643 10:53:05 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:08.643 10:53:05 -- 
common/autotest_common.sh@85 -- # : 1 00:07:08.644 10:53:05 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:08.644 10:53:05 -- common/autotest_common.sh@87 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:08.644 10:53:05 -- common/autotest_common.sh@89 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:08.644 10:53:05 -- common/autotest_common.sh@91 -- # : 1 00:07:08.644 10:53:05 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:08.644 10:53:05 -- common/autotest_common.sh@93 -- # : 1 00:07:08.644 10:53:05 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:08.644 10:53:05 -- common/autotest_common.sh@95 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:08.644 10:53:05 -- common/autotest_common.sh@97 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:08.644 10:53:05 -- common/autotest_common.sh@99 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:08.644 10:53:05 -- common/autotest_common.sh@101 -- # : tcp 00:07:08.644 10:53:05 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:08.644 10:53:05 -- common/autotest_common.sh@103 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:08.644 10:53:05 -- common/autotest_common.sh@105 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:08.644 10:53:05 -- common/autotest_common.sh@107 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:08.644 10:53:05 -- common/autotest_common.sh@109 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:08.644 10:53:05 -- common/autotest_common.sh@111 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:08.644 10:53:05 -- common/autotest_common.sh@113 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:08.644 10:53:05 -- common/autotest_common.sh@115 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:08.644 10:53:05 -- common/autotest_common.sh@117 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:08.644 10:53:05 -- common/autotest_common.sh@119 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:08.644 10:53:05 -- common/autotest_common.sh@121 -- # : 1 00:07:08.644 10:53:05 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:08.644 10:53:05 -- common/autotest_common.sh@123 -- # : 00:07:08.644 10:53:05 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:08.644 10:53:05 -- common/autotest_common.sh@125 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:08.644 10:53:05 -- common/autotest_common.sh@127 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:08.644 10:53:05 -- common/autotest_common.sh@129 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:08.644 10:53:05 -- common/autotest_common.sh@131 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:08.644 
10:53:05 -- common/autotest_common.sh@133 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:08.644 10:53:05 -- common/autotest_common.sh@135 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:08.644 10:53:05 -- common/autotest_common.sh@137 -- # : 00:07:08.644 10:53:05 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:08.644 10:53:05 -- common/autotest_common.sh@139 -- # : true 00:07:08.644 10:53:05 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:08.644 10:53:05 -- common/autotest_common.sh@141 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:08.644 10:53:05 -- common/autotest_common.sh@143 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:08.644 10:53:05 -- common/autotest_common.sh@145 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:08.644 10:53:05 -- common/autotest_common.sh@147 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:08.644 10:53:05 -- common/autotest_common.sh@149 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:08.644 10:53:05 -- common/autotest_common.sh@151 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:08.644 10:53:05 -- common/autotest_common.sh@153 -- # : e810 00:07:08.644 10:53:05 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:08.644 10:53:05 -- common/autotest_common.sh@155 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:08.644 10:53:05 -- common/autotest_common.sh@157 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:08.644 10:53:05 -- common/autotest_common.sh@159 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:08.644 10:53:05 -- common/autotest_common.sh@161 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:08.644 10:53:05 -- common/autotest_common.sh@163 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:08.644 10:53:05 -- common/autotest_common.sh@166 -- # : 00:07:08.644 10:53:05 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:08.644 10:53:05 -- common/autotest_common.sh@168 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:08.644 10:53:05 -- common/autotest_common.sh@170 -- # : 0 00:07:08.644 10:53:05 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:08.644 10:53:05 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:08.644 10:53:05 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:08.644 10:53:05 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:08.644 10:53:05 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:08.644 10:53:05 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:07:08.644 10:53:05 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.644 10:53:05 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.644 10:53:05 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.644 10:53:05 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:08.644 10:53:05 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:08.644 10:53:05 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:08.644 10:53:05 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:08.644 10:53:05 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:08.644 10:53:05 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:08.644 10:53:05 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:08.644 10:53:05 -- 
common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:08.644 10:53:05 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:08.644 10:53:05 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:08.644 10:53:05 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:08.644 10:53:05 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:08.644 10:53:05 -- common/autotest_common.sh@199 -- # cat 00:07:08.644 10:53:05 -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:08.644 10:53:05 -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:08.644 10:53:05 -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:08.644 10:53:05 -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:08.644 10:53:05 -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:08.644 10:53:05 -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:08.644 10:53:05 -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:08.644 10:53:05 -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:08.644 10:53:05 -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:08.644 10:53:05 -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:08.644 10:53:05 -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:08.644 10:53:05 -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.644 10:53:05 -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.644 10:53:05 -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.645 10:53:05 -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.645 10:53:05 -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:08.645 10:53:05 -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:08.645 10:53:05 -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.645 10:53:05 -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.645 10:53:05 -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:08.645 10:53:05 -- common/autotest_common.sh@262 -- # export valgrind= 00:07:08.645 10:53:05 -- common/autotest_common.sh@262 -- # valgrind= 00:07:08.645 10:53:05 -- common/autotest_common.sh@268 -- # uname -s 00:07:08.645 10:53:05 -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:08.645 10:53:05 -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:08.645 10:53:05 -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:08.645 10:53:05 -- 
common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:08.645 10:53:05 -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:08.645 10:53:05 -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:08.645 10:53:05 -- common/autotest_common.sh@278 -- # MAKE=make 00:07:08.645 10:53:05 -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j144 00:07:08.645 10:53:05 -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:08.645 10:53:05 -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:08.645 10:53:05 -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:08.645 10:53:05 -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:08.645 10:53:05 -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:08.645 10:53:05 -- common/autotest_common.sh@300 -- # case "$i" in 00:07:08.645 10:53:05 -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:08.645 10:53:05 -- common/autotest_common.sh@317 -- # [[ -z 160076 ]] 00:07:08.645 10:53:05 -- common/autotest_common.sh@317 -- # kill -0 160076 00:07:08.645 10:53:05 -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:08.645 10:53:05 -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:08.645 10:53:05 -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:08.645 10:53:05 -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:08.645 10:53:05 -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:08.645 10:53:05 -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:08.645 10:53:05 -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:08.645 10:53:05 -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:08.645 10:53:05 -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.sn3lbR 00:07:08.645 10:53:05 -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:08.645 10:53:05 -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:08.645 10:53:05 -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:08.645 10:53:05 -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.sn3lbR/tests/target /tmp/spdk.sn3lbR 00:07:08.645 10:53:05 -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:08.645 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.645 10:53:05 -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:08.645 10:53:05 -- common/autotest_common.sh@326 -- # df -T 00:07:08.645 10:53:05 -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:08.645 10:53:05 -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:08.645 10:53:05 -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:08.645 10:53:05 -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:08.645 10:53:05 -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:08.645 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.645 10:53:05 -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:08.645 10:53:05 -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:08.645 10:53:05 -- common/autotest_common.sh@361 -- # avails["$mount"]=967749632 00:07:08.645 10:53:05 -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:08.645 10:53:05 -- 
common/autotest_common.sh@362 -- # uses["$mount"]=4316680192 00:07:08.908 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # avails["$mount"]=124068876288 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # sizes["$mount"]=129371017216 00:07:08.908 10:53:05 -- common/autotest_common.sh@362 -- # uses["$mount"]=5302140928 00:07:08.908 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # avails["$mount"]=64682131456 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685506560 00:07:08.908 10:53:05 -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:08.908 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # avails["$mount"]=25864515584 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # sizes["$mount"]=25874206720 00:07:08.908 10:53:05 -- common/autotest_common.sh@362 -- # uses["$mount"]=9691136 00:07:08.908 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # mounts["$mount"]=efivarfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # fss["$mount"]=efivarfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # avails["$mount"]=234496 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # sizes["$mount"]=507904 00:07:08.908 10:53:05 -- common/autotest_common.sh@362 -- # uses["$mount"]=269312 00:07:08.908 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # avails["$mount"]=64685338624 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685510656 00:07:08.908 10:53:05 -- common/autotest_common.sh@362 -- # uses["$mount"]=172032 00:07:08.908 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # avails["$mount"]=12937097216 00:07:08.908 10:53:05 -- common/autotest_common.sh@361 -- # sizes["$mount"]=12937101312 00:07:08.908 10:53:05 -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:08.908 10:53:05 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:08.908 10:53:05 -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:08.908 * Looking for test storage... 
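Editor's note: set_test_storage above walks the df -T output into the mounts/fss/sizes/avails arrays and, in the lines that follow, selects the mount backing the test directory and checks it can hold the roughly 2 GiB the filesystem test requests. A standalone sketch of that free-space check, assuming a POSIX df and the requested size seen in this trace; it is not the harness's own helper:

  # sketch only: verify the filesystem backing a test dir has enough free space
  testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
  requested=2214592512                                   # ~2 GiB, as requested in the trace
  avail_kb=$(df -P "$testdir" | awk 'NR==2 {print $4}')  # available 1K-blocks on that mount
  if (( avail_kb * 1024 >= requested )); then
      echo "enough space on $(df -P "$testdir" | awk 'NR==2 {print $6}')"
  else
      echo "not enough space; fall back to a temporary directory" >&2
  fi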
00:07:08.908 10:53:05 -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:08.908 10:53:05 -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:08.908 10:53:05 -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.908 10:53:05 -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:08.908 10:53:05 -- common/autotest_common.sh@371 -- # mount=/ 00:07:08.908 10:53:05 -- common/autotest_common.sh@373 -- # target_space=124068876288 00:07:08.908 10:53:05 -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:08.908 10:53:05 -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:08.908 10:53:05 -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:08.908 10:53:05 -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:08.908 10:53:05 -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:08.908 10:53:05 -- common/autotest_common.sh@380 -- # new_size=7516733440 00:07:08.908 10:53:05 -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:08.908 10:53:05 -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.908 10:53:05 -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.908 10:53:05 -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.908 10:53:05 -- common/autotest_common.sh@388 -- # return 0 00:07:08.908 10:53:05 -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:08.908 10:53:05 -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:08.908 10:53:05 -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:08.908 10:53:05 -- common/autotest_common.sh@1682 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:08.908 10:53:05 -- common/autotest_common.sh@1683 -- # true 00:07:08.908 10:53:05 -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:08.908 10:53:05 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:08.908 10:53:05 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:08.908 10:53:05 -- common/autotest_common.sh@27 -- # exec 00:07:08.908 10:53:05 -- common/autotest_common.sh@29 -- # exec 00:07:08.908 10:53:05 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:08.908 10:53:05 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:08.908 10:53:05 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:08.908 10:53:05 -- common/autotest_common.sh@18 -- # set -x 00:07:08.908 10:53:05 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.908 10:53:05 -- nvmf/common.sh@7 -- # uname -s 00:07:08.908 10:53:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.908 10:53:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.908 10:53:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.908 10:53:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.908 10:53:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.908 10:53:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.908 10:53:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.908 10:53:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.908 10:53:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.908 10:53:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.908 10:53:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.908 10:53:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.908 10:53:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.908 10:53:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.908 10:53:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.908 10:53:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.908 10:53:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.908 10:53:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.908 10:53:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.908 10:53:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.908 10:53:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.908 10:53:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.908 10:53:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.908 10:53:05 -- paths/export.sh@5 -- # export PATH 00:07:08.908 10:53:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.908 10:53:05 -- nvmf/common.sh@47 -- # : 0 00:07:08.908 10:53:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.908 10:53:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.908 10:53:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.908 10:53:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.908 10:53:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.908 10:53:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.908 10:53:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.908 10:53:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.908 10:53:05 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:08.908 10:53:05 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:08.909 10:53:05 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:08.909 10:53:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:08.909 10:53:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.909 10:53:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:08.909 10:53:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:08.909 10:53:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:08.909 10:53:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.909 10:53:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.909 10:53:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.909 10:53:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:08.909 10:53:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:08.909 10:53:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:08.909 10:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:15.504 10:53:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:15.504 10:53:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.504 10:53:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.504 10:53:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.504 10:53:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.504 10:53:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.504 10:53:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.504 10:53:11 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:15.504 10:53:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.504 10:53:11 -- nvmf/common.sh@296 -- # e810=() 00:07:15.504 10:53:11 -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.504 10:53:11 -- nvmf/common.sh@297 -- # x722=() 00:07:15.504 10:53:11 -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.504 10:53:11 -- nvmf/common.sh@298 -- # mlx=() 00:07:15.504 10:53:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.504 10:53:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.504 10:53:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.504 10:53:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.504 10:53:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.504 10:53:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.504 10:53:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:15.504 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:15.504 10:53:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.504 10:53:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:15.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:15.504 10:53:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.504 10:53:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.504 10:53:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.504 10:53:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:15.504 10:53:12 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.504 10:53:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:15.504 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:15.504 10:53:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.504 10:53:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.504 10:53:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.504 10:53:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:15.504 10:53:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.504 10:53:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:15.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:15.504 10:53:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.504 10:53:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:15.504 10:53:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:15.504 10:53:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:15.504 10:53:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:15.504 10:53:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.504 10:53:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.504 10:53:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.504 10:53:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.504 10:53:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.504 10:53:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.504 10:53:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.504 10:53:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.504 10:53:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.504 10:53:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.504 10:53:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.504 10:53:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.504 10:53:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.504 10:53:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.505 10:53:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.766 10:53:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.766 10:53:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.766 10:53:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.766 10:53:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.766 10:53:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:07:15.766 00:07:15.766 --- 10.0.0.2 ping statistics --- 00:07:15.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.766 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:07:15.766 10:53:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:07:15.766 00:07:15.766 --- 10.0.0.1 ping statistics --- 00:07:15.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.766 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:07:15.766 10:53:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.766 10:53:12 -- nvmf/common.sh@411 -- # return 0 00:07:15.767 10:53:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:15.767 10:53:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.767 10:53:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:15.767 10:53:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:15.767 10:53:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.767 10:53:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:15.767 10:53:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:15.767 10:53:12 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:15.767 10:53:12 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:15.767 10:53:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.767 10:53:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 ************************************ 00:07:15.767 START TEST nvmf_filesystem_no_in_capsule 00:07:15.767 ************************************ 00:07:15.767 10:53:12 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:15.767 10:53:12 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:15.767 10:53:12 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:15.767 10:53:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:15.767 10:53:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:15.767 10:53:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 10:53:12 -- nvmf/common.sh@470 -- # nvmfpid=163850 00:07:15.767 10:53:12 -- nvmf/common.sh@471 -- # waitforlisten 163850 00:07:15.767 10:53:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:15.767 10:53:12 -- common/autotest_common.sh@827 -- # '[' -z 163850 ']' 00:07:15.767 10:53:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.767 10:53:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.767 10:53:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.767 10:53:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.767 10:53:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.767 [2024-05-15 10:53:12.419711] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:07:15.767 [2024-05-15 10:53:12.419799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.027 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.027 [2024-05-15 10:53:12.492753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.027 [2024-05-15 10:53:12.569175] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
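For reference while reading the nvmf_tcp_init trace above: the target-side plumbing it sets up amounts to the following, a simplified sketch reconstructed from the commands visible in this log (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and port 4420 are taken from the trace; everything else is illustrative):

    ip netns add cvl_0_0_ns_spdk                                          # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the first E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP through
    ping -c 1 10.0.0.2                                                    # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1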
00:07:16.027 [2024-05-15 10:53:12.569212] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.027 [2024-05-15 10:53:12.569220] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.027 [2024-05-15 10:53:12.569227] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.027 [2024-05-15 10:53:12.569233] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.027 [2024-05-15 10:53:12.569369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.027 [2024-05-15 10:53:12.569488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.027 [2024-05-15 10:53:12.569645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.027 [2024-05-15 10:53:12.569646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.598 10:53:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.598 10:53:13 -- common/autotest_common.sh@860 -- # return 0 00:07:16.598 10:53:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:16.598 10:53:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.598 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.599 10:53:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.599 10:53:13 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:16.599 10:53:13 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:16.599 10:53:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.599 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.599 [2024-05-15 10:53:13.242112] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.599 10:53:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.599 10:53:13 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:16.599 10:53:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.599 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.860 Malloc1 00:07:16.860 10:53:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.860 10:53:13 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:16.860 10:53:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.860 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.860 10:53:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.860 10:53:13 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.860 10:53:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.860 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.860 10:53:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.860 10:53:13 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.860 10:53:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.860 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.860 [2024-05-15 10:53:13.373577] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:16.861 [2024-05-15 10:53:13.373795] tcp.c: 
965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.861 10:53:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.861 10:53:13 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:16.861 10:53:13 -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:16.861 10:53:13 -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:16.861 10:53:13 -- common/autotest_common.sh@1376 -- # local bs 00:07:16.861 10:53:13 -- common/autotest_common.sh@1377 -- # local nb 00:07:16.861 10:53:13 -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:16.861 10:53:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.861 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.861 10:53:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.861 10:53:13 -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:16.861 { 00:07:16.861 "name": "Malloc1", 00:07:16.861 "aliases": [ 00:07:16.861 "425c0205-e315-4062-bd18-f8d0d56e5b2d" 00:07:16.861 ], 00:07:16.861 "product_name": "Malloc disk", 00:07:16.861 "block_size": 512, 00:07:16.861 "num_blocks": 1048576, 00:07:16.861 "uuid": "425c0205-e315-4062-bd18-f8d0d56e5b2d", 00:07:16.861 "assigned_rate_limits": { 00:07:16.861 "rw_ios_per_sec": 0, 00:07:16.861 "rw_mbytes_per_sec": 0, 00:07:16.861 "r_mbytes_per_sec": 0, 00:07:16.861 "w_mbytes_per_sec": 0 00:07:16.861 }, 00:07:16.861 "claimed": true, 00:07:16.861 "claim_type": "exclusive_write", 00:07:16.861 "zoned": false, 00:07:16.861 "supported_io_types": { 00:07:16.861 "read": true, 00:07:16.861 "write": true, 00:07:16.861 "unmap": true, 00:07:16.861 "write_zeroes": true, 00:07:16.861 "flush": true, 00:07:16.861 "reset": true, 00:07:16.861 "compare": false, 00:07:16.861 "compare_and_write": false, 00:07:16.861 "abort": true, 00:07:16.861 "nvme_admin": false, 00:07:16.861 "nvme_io": false 00:07:16.861 }, 00:07:16.861 "memory_domains": [ 00:07:16.861 { 00:07:16.861 "dma_device_id": "system", 00:07:16.861 "dma_device_type": 1 00:07:16.861 }, 00:07:16.861 { 00:07:16.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.861 "dma_device_type": 2 00:07:16.861 } 00:07:16.861 ], 00:07:16.861 "driver_specific": {} 00:07:16.861 } 00:07:16.861 ]' 00:07:16.861 10:53:13 -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:16.861 10:53:13 -- common/autotest_common.sh@1379 -- # bs=512 00:07:16.861 10:53:13 -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:16.861 10:53:13 -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:16.861 10:53:13 -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:16.861 10:53:13 -- common/autotest_common.sh@1384 -- # echo 512 00:07:16.861 10:53:13 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:16.861 10:53:13 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.777 10:53:15 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.777 10:53:15 -- common/autotest_common.sh@1194 -- # local i=0 00:07:18.777 10:53:15 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.777 10:53:15 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:18.777 10:53:15 -- common/autotest_common.sh@1201 -- # sleep 2 00:07:20.690 10:53:17 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:20.690 10:53:17 -- 
common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:20.690 10:53:17 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.690 10:53:17 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:20.690 10:53:17 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.690 10:53:17 -- common/autotest_common.sh@1204 -- # return 0 00:07:20.690 10:53:17 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:20.690 10:53:17 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:20.690 10:53:17 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:20.690 10:53:17 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:20.690 10:53:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:20.690 10:53:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:20.690 10:53:17 -- setup/common.sh@80 -- # echo 536870912 00:07:20.690 10:53:17 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:20.690 10:53:17 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:20.690 10:53:17 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:20.690 10:53:17 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:20.690 10:53:17 -- target/filesystem.sh@69 -- # partprobe 00:07:20.690 10:53:17 -- target/filesystem.sh@70 -- # sleep 1 00:07:21.633 10:53:18 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:21.633 10:53:18 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:21.633 10:53:18 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:21.633 10:53:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.633 10:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 ************************************ 00:07:21.895 START TEST filesystem_ext4 00:07:21.895 ************************************ 00:07:21.895 10:53:18 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:21.895 10:53:18 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:21.895 10:53:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:21.895 10:53:18 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:21.895 10:53:18 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:21.895 10:53:18 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:21.895 10:53:18 -- common/autotest_common.sh@924 -- # local i=0 00:07:21.895 10:53:18 -- common/autotest_common.sh@925 -- # local force 00:07:21.895 10:53:18 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:21.895 10:53:18 -- common/autotest_common.sh@928 -- # force=-F 00:07:21.895 10:53:18 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:21.895 mke2fs 1.46.5 (30-Dec-2021) 00:07:21.895 Discarding device blocks: 0/522240 done 00:07:21.895 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:21.895 Filesystem UUID: bc89f1a1-a328-4842-851a-8dfa3efb3574 00:07:21.895 Superblock backups stored on blocks: 00:07:21.895 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:21.895 00:07:21.895 Allocating group tables: 0/64 done 00:07:21.895 Writing inode tables: 0/64 done 00:07:21.895 Creating journal (8192 blocks): done 00:07:21.895 Writing superblocks and filesystem accounting information: 0/64 done 00:07:21.895 00:07:21.895 10:53:18 -- common/autotest_common.sh@941 -- # return 0 00:07:21.895 10:53:18 -- target/filesystem.sh@23 -- # 
mount /dev/nvme0n1p1 /mnt/device 00:07:22.156 10:53:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:22.156 10:53:18 -- target/filesystem.sh@25 -- # sync 00:07:22.156 10:53:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:22.156 10:53:18 -- target/filesystem.sh@27 -- # sync 00:07:22.156 10:53:18 -- target/filesystem.sh@29 -- # i=0 00:07:22.156 10:53:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:22.156 10:53:18 -- target/filesystem.sh@37 -- # kill -0 163850 00:07:22.156 10:53:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:22.156 10:53:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:22.156 10:53:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:22.156 10:53:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:22.418 00:07:22.418 real 0m0.486s 00:07:22.418 user 0m0.029s 00:07:22.418 sys 0m0.064s 00:07:22.418 10:53:18 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.418 10:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:22.418 ************************************ 00:07:22.418 END TEST filesystem_ext4 00:07:22.418 ************************************ 00:07:22.418 10:53:18 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:22.418 10:53:18 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:22.418 10:53:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.418 10:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:22.418 ************************************ 00:07:22.418 START TEST filesystem_btrfs 00:07:22.418 ************************************ 00:07:22.418 10:53:18 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:22.418 10:53:18 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:22.418 10:53:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.418 10:53:18 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:22.418 10:53:18 -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:22.418 10:53:18 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:22.418 10:53:18 -- common/autotest_common.sh@924 -- # local i=0 00:07:22.418 10:53:18 -- common/autotest_common.sh@925 -- # local force 00:07:22.418 10:53:18 -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:22.418 10:53:18 -- common/autotest_common.sh@930 -- # force=-f 00:07:22.418 10:53:18 -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:22.992 btrfs-progs v6.6.2 00:07:22.992 See https://btrfs.readthedocs.io for more information. 00:07:22.992 00:07:22.992 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:22.992 NOTE: several default settings have changed in version 5.15, please make sure 00:07:22.992 this does not affect your deployments: 00:07:22.992 - DUP for metadata (-m dup) 00:07:22.992 - enabled no-holes (-O no-holes) 00:07:22.992 - enabled free-space-tree (-R free-space-tree) 00:07:22.992 00:07:22.992 Label: (null) 00:07:22.992 UUID: 6188a95e-b973-4327-81f1-0cdfebcb2b84 00:07:22.992 Node size: 16384 00:07:22.992 Sector size: 4096 00:07:22.992 Filesystem size: 510.00MiB 00:07:22.992 Block group profiles: 00:07:22.992 Data: single 8.00MiB 00:07:22.992 Metadata: DUP 32.00MiB 00:07:22.992 System: DUP 8.00MiB 00:07:22.992 SSD detected: yes 00:07:22.992 Zoned device: no 00:07:22.992 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:22.992 Runtime features: free-space-tree 00:07:22.992 Checksum: crc32c 00:07:22.992 Number of devices: 1 00:07:22.992 Devices: 00:07:22.992 ID SIZE PATH 00:07:22.992 1 510.00MiB /dev/nvme0n1p1 00:07:22.992 00:07:22.992 10:53:19 -- common/autotest_common.sh@941 -- # return 0 00:07:22.992 10:53:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:23.254 10:53:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:23.254 10:53:19 -- target/filesystem.sh@25 -- # sync 00:07:23.254 10:53:19 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:23.254 10:53:19 -- target/filesystem.sh@27 -- # sync 00:07:23.254 10:53:19 -- target/filesystem.sh@29 -- # i=0 00:07:23.254 10:53:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.254 10:53:19 -- target/filesystem.sh@37 -- # kill -0 163850 00:07:23.254 10:53:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.254 10:53:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.254 10:53:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.254 10:53:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.254 00:07:23.254 real 0m0.825s 00:07:23.254 user 0m0.020s 00:07:23.254 sys 0m0.193s 00:07:23.254 10:53:19 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.254 10:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:23.254 ************************************ 00:07:23.254 END TEST filesystem_btrfs 00:07:23.254 ************************************ 00:07:23.254 10:53:19 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:23.254 10:53:19 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:23.254 10:53:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.254 10:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:23.254 ************************************ 00:07:23.254 START TEST filesystem_xfs 00:07:23.254 ************************************ 00:07:23.254 10:53:19 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:23.254 10:53:19 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:23.254 10:53:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.254 10:53:19 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:23.254 10:53:19 -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:23.254 10:53:19 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:23.254 10:53:19 -- common/autotest_common.sh@924 -- # local i=0 00:07:23.254 10:53:19 -- common/autotest_common.sh@925 -- # local force 00:07:23.254 10:53:19 -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:23.254 10:53:19 -- common/autotest_common.sh@930 -- # force=-f 00:07:23.254 10:53:19 -- 
common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:23.254 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:23.254 = sectsz=512 attr=2, projid32bit=1 00:07:23.254 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:23.254 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:23.254 data = bsize=4096 blocks=130560, imaxpct=25 00:07:23.254 = sunit=0 swidth=0 blks 00:07:23.254 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:23.254 log =internal log bsize=4096 blocks=16384, version=2 00:07:23.254 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:23.254 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:24.640 Discarding blocks...Done. 00:07:24.640 10:53:20 -- common/autotest_common.sh@941 -- # return 0 00:07:24.640 10:53:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.943 10:53:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.943 10:53:24 -- target/filesystem.sh@25 -- # sync 00:07:27.943 10:53:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.943 10:53:24 -- target/filesystem.sh@27 -- # sync 00:07:27.943 10:53:24 -- target/filesystem.sh@29 -- # i=0 00:07:27.943 10:53:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.943 10:53:24 -- target/filesystem.sh@37 -- # kill -0 163850 00:07:27.943 10:53:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.943 10:53:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.943 10:53:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.943 10:53:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.203 00:07:28.203 real 0m4.796s 00:07:28.203 user 0m0.018s 00:07:28.203 sys 0m0.123s 00:07:28.203 10:53:24 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.203 10:53:24 -- common/autotest_common.sh@10 -- # set +x 00:07:28.204 ************************************ 00:07:28.204 END TEST filesystem_xfs 00:07:28.204 ************************************ 00:07:28.204 10:53:24 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:28.464 10:53:24 -- target/filesystem.sh@93 -- # sync 00:07:28.464 10:53:24 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:28.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.464 10:53:25 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:28.464 10:53:25 -- common/autotest_common.sh@1215 -- # local i=0 00:07:28.464 10:53:25 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:28.464 10:53:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.464 10:53:25 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:28.464 10:53:25 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.464 10:53:25 -- common/autotest_common.sh@1227 -- # return 0 00:07:28.464 10:53:25 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.464 10:53:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.464 10:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:28.725 10:53:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.725 10:53:25 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:28.725 10:53:25 -- target/filesystem.sh@101 -- # killprocess 163850 00:07:28.725 10:53:25 -- common/autotest_common.sh@946 -- # '[' -z 163850 ']' 00:07:28.725 10:53:25 -- common/autotest_common.sh@950 -- # kill -0 163850 00:07:28.725 10:53:25 -- 
common/autotest_common.sh@951 -- # uname 00:07:28.725 10:53:25 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.725 10:53:25 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 163850 00:07:28.725 10:53:25 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:28.725 10:53:25 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:28.725 10:53:25 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 163850' 00:07:28.725 killing process with pid 163850 00:07:28.725 10:53:25 -- common/autotest_common.sh@965 -- # kill 163850 00:07:28.725 [2024-05-15 10:53:25.179668] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:28.725 10:53:25 -- common/autotest_common.sh@970 -- # wait 163850 00:07:28.986 10:53:25 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:28.986 00:07:28.986 real 0m13.052s 00:07:28.986 user 0m51.382s 00:07:28.986 sys 0m1.344s 00:07:28.986 10:53:25 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.986 10:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:28.986 ************************************ 00:07:28.986 END TEST nvmf_filesystem_no_in_capsule 00:07:28.986 ************************************ 00:07:28.986 10:53:25 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:28.986 10:53:25 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:28.986 10:53:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.986 10:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:28.986 ************************************ 00:07:28.986 START TEST nvmf_filesystem_in_capsule 00:07:28.986 ************************************ 00:07:28.986 10:53:25 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:28.986 10:53:25 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:28.986 10:53:25 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:28.986 10:53:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:28.986 10:53:25 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:28.986 10:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:28.986 10:53:25 -- nvmf/common.sh@470 -- # nvmfpid=166635 00:07:28.986 10:53:25 -- nvmf/common.sh@471 -- # waitforlisten 166635 00:07:28.986 10:53:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.986 10:53:25 -- common/autotest_common.sh@827 -- # '[' -z 166635 ']' 00:07:28.986 10:53:25 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.986 10:53:25 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.986 10:53:25 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.986 10:53:25 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.986 10:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:28.986 [2024-05-15 10:53:25.553052] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
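As in the first half of the test, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket responds. A rough sketch of that step, assuming the wait is a simple poll of the UNIX-domain RPC socket (the real helper in autotest_common.sh may differ in detail):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                                       # max_retries=100, as in the trace
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done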
00:07:28.986 [2024-05-15 10:53:25.553115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.986 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.986 [2024-05-15 10:53:25.618858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.247 [2024-05-15 10:53:25.686643] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.247 [2024-05-15 10:53:25.686676] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.247 [2024-05-15 10:53:25.686684] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.247 [2024-05-15 10:53:25.686691] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.247 [2024-05-15 10:53:25.686697] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.247 [2024-05-15 10:53:25.686847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.247 [2024-05-15 10:53:25.686976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.247 [2024-05-15 10:53:25.687137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.247 [2024-05-15 10:53:25.687137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.819 10:53:26 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:29.819 10:53:26 -- common/autotest_common.sh@860 -- # return 0 00:07:29.819 10:53:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:29.819 10:53:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.819 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.819 10:53:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.819 10:53:26 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:29.819 10:53:26 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:29.819 10:53:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.819 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.819 [2024-05-15 10:53:26.371147] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.819 10:53:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.819 10:53:26 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:29.819 10:53:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.819 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.819 Malloc1 00:07:29.819 10:53:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.819 10:53:26 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:29.819 10:53:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.819 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:30.080 10:53:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.080 10:53:26 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:30.080 10:53:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.080 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:30.080 10:53:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.080 10:53:26 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.080 10:53:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.080 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:30.080 [2024-05-15 10:53:26.496605] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:30.080 [2024-05-15 10:53:26.496841] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.080 10:53:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.080 10:53:26 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:30.080 10:53:26 -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:30.080 10:53:26 -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:30.080 10:53:26 -- common/autotest_common.sh@1376 -- # local bs 00:07:30.080 10:53:26 -- common/autotest_common.sh@1377 -- # local nb 00:07:30.080 10:53:26 -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:30.080 10:53:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.080 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:30.080 10:53:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.080 10:53:26 -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:30.080 { 00:07:30.080 "name": "Malloc1", 00:07:30.080 "aliases": [ 00:07:30.080 "980c4a91-0a03-4e0b-b198-06e2cd307032" 00:07:30.080 ], 00:07:30.080 "product_name": "Malloc disk", 00:07:30.080 "block_size": 512, 00:07:30.080 "num_blocks": 1048576, 00:07:30.080 "uuid": "980c4a91-0a03-4e0b-b198-06e2cd307032", 00:07:30.080 "assigned_rate_limits": { 00:07:30.080 "rw_ios_per_sec": 0, 00:07:30.080 "rw_mbytes_per_sec": 0, 00:07:30.080 "r_mbytes_per_sec": 0, 00:07:30.080 "w_mbytes_per_sec": 0 00:07:30.080 }, 00:07:30.080 "claimed": true, 00:07:30.080 "claim_type": "exclusive_write", 00:07:30.080 "zoned": false, 00:07:30.080 "supported_io_types": { 00:07:30.080 "read": true, 00:07:30.080 "write": true, 00:07:30.080 "unmap": true, 00:07:30.080 "write_zeroes": true, 00:07:30.080 "flush": true, 00:07:30.080 "reset": true, 00:07:30.080 "compare": false, 00:07:30.080 "compare_and_write": false, 00:07:30.080 "abort": true, 00:07:30.080 "nvme_admin": false, 00:07:30.080 "nvme_io": false 00:07:30.080 }, 00:07:30.080 "memory_domains": [ 00:07:30.080 { 00:07:30.080 "dma_device_id": "system", 00:07:30.080 "dma_device_type": 1 00:07:30.080 }, 00:07:30.080 { 00:07:30.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.080 "dma_device_type": 2 00:07:30.080 } 00:07:30.080 ], 00:07:30.080 "driver_specific": {} 00:07:30.080 } 00:07:30.080 ]' 00:07:30.080 10:53:26 -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:30.080 10:53:26 -- common/autotest_common.sh@1379 -- # bs=512 00:07:30.080 10:53:26 -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:30.080 10:53:26 -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:30.080 10:53:26 -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:30.080 10:53:26 -- common/autotest_common.sh@1384 -- # echo 512 00:07:30.080 10:53:26 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:30.080 10:53:26 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.467 10:53:28 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:31.467 10:53:28 -- common/autotest_common.sh@1194 -- # local i=0 00:07:31.467 10:53:28 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:31.467 10:53:28 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:31.467 10:53:28 -- common/autotest_common.sh@1201 -- # sleep 2 00:07:34.016 10:53:30 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:34.016 10:53:30 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:34.016 10:53:30 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.016 10:53:30 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:34.016 10:53:30 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.016 10:53:30 -- common/autotest_common.sh@1204 -- # return 0 00:07:34.016 10:53:30 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:34.016 10:53:30 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:34.016 10:53:30 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:34.016 10:53:30 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:34.016 10:53:30 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:34.016 10:53:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:34.016 10:53:30 -- setup/common.sh@80 -- # echo 536870912 00:07:34.016 10:53:30 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:34.016 10:53:30 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:34.016 10:53:30 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:34.016 10:53:30 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:34.016 10:53:30 -- target/filesystem.sh@69 -- # partprobe 00:07:34.277 10:53:30 -- target/filesystem.sh@70 -- # sleep 1 00:07:35.220 10:53:31 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:35.220 10:53:31 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:35.220 10:53:31 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:35.220 10:53:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.220 10:53:31 -- common/autotest_common.sh@10 -- # set +x 00:07:35.482 ************************************ 00:07:35.482 START TEST filesystem_in_capsule_ext4 00:07:35.482 ************************************ 00:07:35.482 10:53:31 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:35.482 10:53:31 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:35.482 10:53:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.482 10:53:31 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:35.482 10:53:31 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:35.482 10:53:31 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:35.482 10:53:31 -- common/autotest_common.sh@924 -- # local i=0 00:07:35.482 10:53:31 -- common/autotest_common.sh@925 -- # local force 00:07:35.482 10:53:31 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:35.482 10:53:31 -- common/autotest_common.sh@928 -- # force=-F 00:07:35.482 10:53:31 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:35.482 mke2fs 1.46.5 (30-Dec-2021) 00:07:35.482 Discarding device blocks: 0/522240 done 00:07:35.482 Creating filesystem with 522240 1k 
blocks and 130560 inodes 00:07:35.482 Filesystem UUID: f1a30a35-8b22-446c-ae97-454b94af4ce9 00:07:35.482 Superblock backups stored on blocks: 00:07:35.482 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:35.482 00:07:35.482 Allocating group tables: 0/64 done 00:07:35.482 Writing inode tables: 0/64 done 00:07:36.423 Creating journal (8192 blocks): done 00:07:36.423 Writing superblocks and filesystem accounting information: 0/64 done 00:07:36.423 00:07:36.423 10:53:33 -- common/autotest_common.sh@941 -- # return 0 00:07:36.423 10:53:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.366 10:53:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.366 10:53:33 -- target/filesystem.sh@25 -- # sync 00:07:37.366 10:53:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.366 10:53:33 -- target/filesystem.sh@27 -- # sync 00:07:37.366 10:53:33 -- target/filesystem.sh@29 -- # i=0 00:07:37.366 10:53:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.366 10:53:33 -- target/filesystem.sh@37 -- # kill -0 166635 00:07:37.366 10:53:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.366 10:53:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.366 10:53:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.366 10:53:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.366 00:07:37.366 real 0m2.038s 00:07:37.366 user 0m0.026s 00:07:37.366 sys 0m0.072s 00:07:37.366 10:53:33 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.366 10:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:37.366 ************************************ 00:07:37.366 END TEST filesystem_in_capsule_ext4 00:07:37.366 ************************************ 00:07:37.366 10:53:33 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.366 10:53:33 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:37.366 10:53:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.366 10:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:37.366 ************************************ 00:07:37.366 START TEST filesystem_in_capsule_btrfs 00:07:37.366 ************************************ 00:07:37.366 10:53:33 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.366 10:53:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.366 10:53:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.366 10:53:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.366 10:53:33 -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:37.366 10:53:33 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:37.366 10:53:33 -- common/autotest_common.sh@924 -- # local i=0 00:07:37.366 10:53:33 -- common/autotest_common.sh@925 -- # local force 00:07:37.366 10:53:33 -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:37.366 10:53:33 -- common/autotest_common.sh@930 -- # force=-f 00:07:37.366 10:53:33 -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:37.938 btrfs-progs v6.6.2 00:07:37.938 See https://btrfs.readthedocs.io for more information. 00:07:37.938 00:07:37.938 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:37.938 NOTE: several default settings have changed in version 5.15, please make sure 00:07:37.938 this does not affect your deployments: 00:07:37.938 - DUP for metadata (-m dup) 00:07:37.938 - enabled no-holes (-O no-holes) 00:07:37.938 - enabled free-space-tree (-R free-space-tree) 00:07:37.938 00:07:37.938 Label: (null) 00:07:37.938 UUID: e80daeaa-86f8-452e-9f31-d2444e90c840 00:07:37.938 Node size: 16384 00:07:37.938 Sector size: 4096 00:07:37.938 Filesystem size: 510.00MiB 00:07:37.938 Block group profiles: 00:07:37.938 Data: single 8.00MiB 00:07:37.938 Metadata: DUP 32.00MiB 00:07:37.938 System: DUP 8.00MiB 00:07:37.938 SSD detected: yes 00:07:37.938 Zoned device: no 00:07:37.938 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:37.938 Runtime features: free-space-tree 00:07:37.938 Checksum: crc32c 00:07:37.938 Number of devices: 1 00:07:37.938 Devices: 00:07:37.938 ID SIZE PATH 00:07:37.938 1 510.00MiB /dev/nvme0n1p1 00:07:37.938 00:07:37.938 10:53:34 -- common/autotest_common.sh@941 -- # return 0 00:07:37.938 10:53:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.199 10:53:34 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.199 10:53:34 -- target/filesystem.sh@25 -- # sync 00:07:38.200 10:53:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.200 10:53:34 -- target/filesystem.sh@27 -- # sync 00:07:38.200 10:53:34 -- target/filesystem.sh@29 -- # i=0 00:07:38.200 10:53:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.200 10:53:34 -- target/filesystem.sh@37 -- # kill -0 166635 00:07:38.200 10:53:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.200 10:53:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.200 10:53:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.200 10:53:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.200 00:07:38.200 real 0m0.853s 00:07:38.200 user 0m0.039s 00:07:38.200 sys 0m0.120s 00:07:38.200 10:53:34 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.200 10:53:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.200 ************************************ 00:07:38.200 END TEST filesystem_in_capsule_btrfs 00:07:38.200 ************************************ 00:07:38.461 10:53:34 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:38.461 10:53:34 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:38.461 10:53:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.461 10:53:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.461 ************************************ 00:07:38.461 START TEST filesystem_in_capsule_xfs 00:07:38.461 ************************************ 00:07:38.461 10:53:34 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:38.461 10:53:34 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:38.461 10:53:34 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.461 10:53:34 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:38.461 10:53:34 -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:38.461 10:53:34 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:38.461 10:53:34 -- common/autotest_common.sh@924 -- # local i=0 00:07:38.461 10:53:34 -- common/autotest_common.sh@925 -- # local force 00:07:38.461 10:53:34 -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:38.461 10:53:34 -- common/autotest_common.sh@930 -- # force=-f 
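The make_filesystem steps traced around here (for xfs below, and for ext4/btrfs earlier) reduce to picking the right force flag and running mkfs. A plausible reconstruction of the helper, with the retry bound assumed since only the first attempt appears in this excerpt:

    make_filesystem() {
        local fstype=$1 dev_name=$2
        local i=0 force
        [ "$fstype" = ext4 ] && force=-F || force=-f                      # ext4 wants -F, btrfs/xfs want -f
        mkfs.$fstype $force "$dev_name" && return 0                       # any retries are not visible here
    }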
00:07:38.461 10:53:34 -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:38.461 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:38.461 = sectsz=512 attr=2, projid32bit=1 00:07:38.461 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:38.461 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:38.461 data = bsize=4096 blocks=130560, imaxpct=25 00:07:38.462 = sunit=0 swidth=0 blks 00:07:38.462 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:38.462 log =internal log bsize=4096 blocks=16384, version=2 00:07:38.462 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:38.462 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:39.405 Discarding blocks...Done. 00:07:39.405 10:53:36 -- common/autotest_common.sh@941 -- # return 0 00:07:39.405 10:53:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.321 10:53:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.321 10:53:37 -- target/filesystem.sh@25 -- # sync 00:07:41.321 10:53:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.321 10:53:37 -- target/filesystem.sh@27 -- # sync 00:07:41.321 10:53:37 -- target/filesystem.sh@29 -- # i=0 00:07:41.321 10:53:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.321 10:53:37 -- target/filesystem.sh@37 -- # kill -0 166635 00:07:41.321 10:53:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.321 10:53:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.321 10:53:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.321 10:53:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.321 00:07:41.321 real 0m3.004s 00:07:41.321 user 0m0.021s 00:07:41.321 sys 0m0.082s 00:07:41.321 10:53:37 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.321 10:53:37 -- common/autotest_common.sh@10 -- # set +x 00:07:41.321 ************************************ 00:07:41.321 END TEST filesystem_in_capsule_xfs 00:07:41.321 ************************************ 00:07:41.582 10:53:37 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:41.843 10:53:38 -- target/filesystem.sh@93 -- # sync 00:07:41.843 10:53:38 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:41.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.843 10:53:38 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:41.843 10:53:38 -- common/autotest_common.sh@1215 -- # local i=0 00:07:41.843 10:53:38 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:41.843 10:53:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.843 10:53:38 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:41.843 10:53:38 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.843 10:53:38 -- common/autotest_common.sh@1227 -- # return 0 00:07:41.843 10:53:38 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.843 10:53:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.843 10:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:41.843 10:53:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.843 10:53:38 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:41.843 10:53:38 -- target/filesystem.sh@101 -- # killprocess 166635 00:07:41.843 10:53:38 -- common/autotest_common.sh@946 -- # '[' -z 166635 ']' 00:07:41.843 10:53:38 -- common/autotest_common.sh@950 -- # kill -0 166635 
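Teardown after each variant follows the same shape as the trace above and below; condensed into plain commands (the subsystem NQN and pid are the ones from this run, and the rpc.py invocation stands in for the rpc_cmd wrapper):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1                        # drop the test partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                         # detach the initiator
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 166635 && wait 166635                                            # stop nvmf_tgt in its namespace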
00:07:41.843 10:53:38 -- common/autotest_common.sh@951 -- # uname 00:07:41.843 10:53:38 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:41.843 10:53:38 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 166635 00:07:41.843 10:53:38 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:41.843 10:53:38 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:41.843 10:53:38 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 166635' 00:07:41.843 killing process with pid 166635 00:07:41.843 10:53:38 -- common/autotest_common.sh@965 -- # kill 166635 00:07:41.843 [2024-05-15 10:53:38.450782] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:41.843 10:53:38 -- common/autotest_common.sh@970 -- # wait 166635 00:07:42.104 10:53:38 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.104 00:07:42.104 real 0m13.186s 00:07:42.104 user 0m52.008s 00:07:42.104 sys 0m1.218s 00:07:42.104 10:53:38 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.104 10:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:42.104 ************************************ 00:07:42.104 END TEST nvmf_filesystem_in_capsule 00:07:42.104 ************************************ 00:07:42.104 10:53:38 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:42.104 10:53:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:42.104 10:53:38 -- nvmf/common.sh@117 -- # sync 00:07:42.104 10:53:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.104 10:53:38 -- nvmf/common.sh@120 -- # set +e 00:07:42.104 10:53:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.104 10:53:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.104 rmmod nvme_tcp 00:07:42.104 rmmod nvme_fabrics 00:07:42.365 rmmod nvme_keyring 00:07:42.365 10:53:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.365 10:53:38 -- nvmf/common.sh@124 -- # set -e 00:07:42.365 10:53:38 -- nvmf/common.sh@125 -- # return 0 00:07:42.365 10:53:38 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:42.365 10:53:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:42.365 10:53:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:42.365 10:53:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:42.365 10:53:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.365 10:53:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.365 10:53:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.365 10:53:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.365 10:53:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.279 10:53:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.279 00:07:44.279 real 0m35.796s 00:07:44.279 user 1m45.617s 00:07:44.279 sys 0m7.813s 00:07:44.279 10:53:40 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.279 10:53:40 -- common/autotest_common.sh@10 -- # set +x 00:07:44.279 ************************************ 00:07:44.279 END TEST nvmf_filesystem 00:07:44.279 ************************************ 00:07:44.279 10:53:40 -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:44.279 10:53:40 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:44.279 10:53:40 -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:07:44.279 10:53:40 -- common/autotest_common.sh@10 -- # set +x 00:07:44.279 ************************************ 00:07:44.279 START TEST nvmf_target_discovery 00:07:44.279 ************************************ 00:07:44.279 10:53:40 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:44.540 * Looking for test storage... 00:07:44.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.540 10:53:41 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.540 10:53:41 -- nvmf/common.sh@7 -- # uname -s 00:07:44.540 10:53:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.540 10:53:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.540 10:53:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.540 10:53:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.540 10:53:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.540 10:53:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.540 10:53:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.540 10:53:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.540 10:53:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.540 10:53:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.540 10:53:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.540 10:53:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.540 10:53:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.540 10:53:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.540 10:53:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.540 10:53:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.540 10:53:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.540 10:53:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.540 10:53:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.540 10:53:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.540 10:53:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.540 10:53:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.540 10:53:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.540 10:53:41 -- paths/export.sh@5 -- # export PATH 00:07:44.540 10:53:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.540 10:53:41 -- nvmf/common.sh@47 -- # : 0 00:07:44.540 10:53:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.540 10:53:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.540 10:53:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.540 10:53:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.540 10:53:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.540 10:53:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.540 10:53:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.540 10:53:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.540 10:53:41 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:44.540 10:53:41 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:44.540 10:53:41 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:44.540 10:53:41 -- target/discovery.sh@15 -- # hash nvme 00:07:44.540 10:53:41 -- target/discovery.sh@20 -- # nvmftestinit 00:07:44.540 10:53:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:44.540 10:53:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.540 10:53:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:44.540 10:53:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:44.540 10:53:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:44.540 10:53:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.540 10:53:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.540 10:53:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.540 10:53:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:44.540 10:53:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:44.540 10:53:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:44.540 10:53:41 -- common/autotest_common.sh@10 -- # set +x 00:07:51.131 10:53:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:51.131 10:53:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.131 10:53:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.131 10:53:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.131 10:53:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.131 10:53:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.131 10:53:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.131 10:53:47 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:51.131 10:53:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.131 10:53:47 -- nvmf/common.sh@296 -- # e810=() 00:07:51.131 10:53:47 -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.131 10:53:47 -- nvmf/common.sh@297 -- # x722=() 00:07:51.131 10:53:47 -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.131 10:53:47 -- nvmf/common.sh@298 -- # mlx=() 00:07:51.131 10:53:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.131 10:53:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.131 10:53:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.131 10:53:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.131 10:53:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.131 10:53:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.131 10:53:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.131 10:53:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.131 10:53:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.131 10:53:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:51.131 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:51.131 10:53:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.131 10:53:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.131 10:53:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.131 10:53:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.131 10:53:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.131 10:53:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.393 10:53:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:51.393 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:51.393 10:53:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.393 10:53:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.393 10:53:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.393 10:53:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:51.393 10:53:47 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.393 10:53:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:51.393 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:51.393 10:53:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.393 10:53:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.393 10:53:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.393 10:53:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:51.393 10:53:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.393 10:53:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:51.393 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:51.393 10:53:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.393 10:53:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:51.393 10:53:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:51.393 10:53:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:51.393 10:53:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:51.393 10:53:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.393 10:53:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.393 10:53:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.393 10:53:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.393 10:53:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.393 10:53:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.393 10:53:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.393 10:53:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.393 10:53:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.393 10:53:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.393 10:53:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.393 10:53:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.393 10:53:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.393 10:53:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.393 10:53:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.393 10:53:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.393 10:53:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.654 10:53:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.654 10:53:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.654 10:53:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:07:51.654 00:07:51.654 --- 10.0.0.2 ping statistics --- 00:07:51.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.654 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:07:51.654 10:53:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:07:51.654 00:07:51.654 --- 10.0.0.1 ping statistics --- 00:07:51.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.654 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:51.654 10:53:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.654 10:53:48 -- nvmf/common.sh@411 -- # return 0 00:07:51.654 10:53:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:51.654 10:53:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.654 10:53:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:51.654 10:53:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:51.654 10:53:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.654 10:53:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:51.654 10:53:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:51.654 10:53:48 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:51.654 10:53:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:51.654 10:53:48 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:51.654 10:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:51.654 10:53:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.654 10:53:48 -- nvmf/common.sh@470 -- # nvmfpid=173747 00:07:51.654 10:53:48 -- nvmf/common.sh@471 -- # waitforlisten 173747 00:07:51.654 10:53:48 -- common/autotest_common.sh@827 -- # '[' -z 173747 ']' 00:07:51.654 10:53:48 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.654 10:53:48 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.654 10:53:48 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.654 10:53:48 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.654 10:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:51.654 [2024-05-15 10:53:48.170526] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:07:51.654 [2024-05-15 10:53:48.170598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.654 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.654 [2024-05-15 10:53:48.238881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.654 [2024-05-15 10:53:48.304670] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.654 [2024-05-15 10:53:48.304702] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.654 [2024-05-15 10:53:48.304710] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.654 [2024-05-15 10:53:48.304717] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.654 [2024-05-15 10:53:48.304723] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
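nvmfappstart above amounts to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers; a minimal sketch, assuming the usual SPDK defaults for the socket path and poll interval:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten: block until the app responds on /var/tmp/spdk.sock
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
      sleep 0.5
  done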
00:07:51.654 [2024-05-15 10:53:48.304863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.654 [2024-05-15 10:53:48.304993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.654 [2024-05-15 10:53:48.305149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.654 [2024-05-15 10:53:48.305150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.596 10:53:48 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.596 10:53:48 -- common/autotest_common.sh@860 -- # return 0 00:07:52.596 10:53:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:52.596 10:53:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.596 10:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.596 10:53:48 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.596 10:53:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 [2024-05-15 10:53:49.006279] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@26 -- # seq 1 4 00:07:52.596 10:53:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.596 10:53:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 Null1 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 [2024-05-15 10:53:49.066423] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:52.596 [2024-05-15 10:53:49.066626] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.596 10:53:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 Null2 00:07:52.596 10:53:49 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.596 10:53:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 Null3 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.596 10:53:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:52.596 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.596 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.596 Null4 00:07:52.596 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.596 10:53:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:52.597 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.597 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.597 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.597 10:53:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:52.597 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.597 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.597 10:53:49 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.597 10:53:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:52.597 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.597 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.597 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.597 10:53:49 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.597 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.597 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.597 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.597 10:53:49 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:52.597 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.597 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.597 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.597 10:53:49 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:07:52.858 00:07:52.858 Discovery Log Number of Records 6, Generation counter 6 00:07:52.858 =====Discovery Log Entry 0====== 00:07:52.858 trtype: tcp 00:07:52.858 adrfam: ipv4 00:07:52.858 subtype: current discovery subsystem 00:07:52.858 treq: not required 00:07:52.858 portid: 0 00:07:52.858 trsvcid: 4420 00:07:52.858 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:52.858 traddr: 10.0.0.2 00:07:52.858 eflags: explicit discovery connections, duplicate discovery information 00:07:52.858 sectype: none 00:07:52.858 =====Discovery Log Entry 1====== 00:07:52.858 trtype: tcp 00:07:52.858 adrfam: ipv4 00:07:52.858 subtype: nvme subsystem 00:07:52.858 treq: not required 00:07:52.858 portid: 0 00:07:52.858 trsvcid: 4420 00:07:52.858 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:52.858 traddr: 10.0.0.2 00:07:52.858 eflags: none 00:07:52.858 sectype: none 00:07:52.858 =====Discovery Log Entry 2====== 00:07:52.858 trtype: tcp 00:07:52.858 adrfam: ipv4 00:07:52.858 subtype: nvme subsystem 00:07:52.858 treq: not required 00:07:52.858 portid: 0 00:07:52.858 trsvcid: 4420 00:07:52.858 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:52.858 traddr: 10.0.0.2 00:07:52.858 eflags: none 00:07:52.858 sectype: none 00:07:52.858 =====Discovery Log Entry 3====== 00:07:52.858 trtype: tcp 00:07:52.858 adrfam: ipv4 00:07:52.858 subtype: nvme subsystem 00:07:52.858 treq: not required 00:07:52.858 portid: 0 00:07:52.858 trsvcid: 4420 00:07:52.858 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:52.858 traddr: 10.0.0.2 00:07:52.858 eflags: none 00:07:52.858 sectype: none 00:07:52.858 =====Discovery Log Entry 4====== 00:07:52.858 trtype: tcp 00:07:52.858 adrfam: ipv4 00:07:52.858 subtype: nvme subsystem 00:07:52.858 treq: not required 00:07:52.858 portid: 0 00:07:52.858 trsvcid: 4420 00:07:52.858 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:52.858 traddr: 10.0.0.2 00:07:52.858 eflags: none 00:07:52.858 sectype: none 00:07:52.858 =====Discovery Log Entry 5====== 00:07:52.858 trtype: tcp 00:07:52.858 adrfam: ipv4 00:07:52.858 subtype: discovery subsystem referral 00:07:52.858 treq: not required 00:07:52.858 portid: 0 00:07:52.858 trsvcid: 4430 00:07:52.858 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:52.858 traddr: 10.0.0.2 00:07:52.858 eflags: none 00:07:52.858 
sectype: none 00:07:52.858 10:53:49 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:52.858 Perform nvmf subsystem discovery via RPC 00:07:52.858 10:53:49 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:52.858 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.858 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.858 [ 00:07:52.858 { 00:07:52.858 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:52.858 "subtype": "Discovery", 00:07:52.858 "listen_addresses": [ 00:07:52.858 { 00:07:52.858 "trtype": "TCP", 00:07:52.858 "adrfam": "IPv4", 00:07:52.858 "traddr": "10.0.0.2", 00:07:52.858 "trsvcid": "4420" 00:07:52.858 } 00:07:52.858 ], 00:07:52.858 "allow_any_host": true, 00:07:52.858 "hosts": [] 00:07:52.858 }, 00:07:52.858 { 00:07:52.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.858 "subtype": "NVMe", 00:07:52.858 "listen_addresses": [ 00:07:52.858 { 00:07:52.858 "trtype": "TCP", 00:07:52.858 "adrfam": "IPv4", 00:07:52.859 "traddr": "10.0.0.2", 00:07:52.859 "trsvcid": "4420" 00:07:52.859 } 00:07:52.859 ], 00:07:52.859 "allow_any_host": true, 00:07:52.859 "hosts": [], 00:07:52.859 "serial_number": "SPDK00000000000001", 00:07:52.859 "model_number": "SPDK bdev Controller", 00:07:52.859 "max_namespaces": 32, 00:07:52.859 "min_cntlid": 1, 00:07:52.859 "max_cntlid": 65519, 00:07:52.859 "namespaces": [ 00:07:52.859 { 00:07:52.859 "nsid": 1, 00:07:52.859 "bdev_name": "Null1", 00:07:52.859 "name": "Null1", 00:07:52.859 "nguid": "AB27FF5020B1444597F76C04D3480E9D", 00:07:52.859 "uuid": "ab27ff50-20b1-4445-97f7-6c04d3480e9d" 00:07:52.859 } 00:07:52.859 ] 00:07:52.859 }, 00:07:52.859 { 00:07:52.859 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:52.859 "subtype": "NVMe", 00:07:52.859 "listen_addresses": [ 00:07:52.859 { 00:07:52.859 "trtype": "TCP", 00:07:52.859 "adrfam": "IPv4", 00:07:52.859 "traddr": "10.0.0.2", 00:07:52.859 "trsvcid": "4420" 00:07:52.859 } 00:07:52.859 ], 00:07:52.859 "allow_any_host": true, 00:07:52.859 "hosts": [], 00:07:52.859 "serial_number": "SPDK00000000000002", 00:07:52.859 "model_number": "SPDK bdev Controller", 00:07:52.859 "max_namespaces": 32, 00:07:52.859 "min_cntlid": 1, 00:07:52.859 "max_cntlid": 65519, 00:07:52.859 "namespaces": [ 00:07:52.859 { 00:07:52.859 "nsid": 1, 00:07:52.859 "bdev_name": "Null2", 00:07:52.859 "name": "Null2", 00:07:52.859 "nguid": "88A1B8C6DDF94F6CA7BDB82DC58FFDD4", 00:07:52.859 "uuid": "88a1b8c6-ddf9-4f6c-a7bd-b82dc58ffdd4" 00:07:52.859 } 00:07:52.859 ] 00:07:52.859 }, 00:07:52.859 { 00:07:52.859 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:52.859 "subtype": "NVMe", 00:07:52.859 "listen_addresses": [ 00:07:52.859 { 00:07:52.859 "trtype": "TCP", 00:07:52.859 "adrfam": "IPv4", 00:07:52.859 "traddr": "10.0.0.2", 00:07:52.859 "trsvcid": "4420" 00:07:52.859 } 00:07:52.859 ], 00:07:52.859 "allow_any_host": true, 00:07:52.859 "hosts": [], 00:07:52.859 "serial_number": "SPDK00000000000003", 00:07:52.859 "model_number": "SPDK bdev Controller", 00:07:52.859 "max_namespaces": 32, 00:07:52.859 "min_cntlid": 1, 00:07:52.859 "max_cntlid": 65519, 00:07:52.859 "namespaces": [ 00:07:52.859 { 00:07:52.859 "nsid": 1, 00:07:52.859 "bdev_name": "Null3", 00:07:52.859 "name": "Null3", 00:07:52.859 "nguid": "7441EB360A3843CDBBBD4BE2696B7B38", 00:07:52.859 "uuid": "7441eb36-0a38-43cd-bbbd-4be2696b7b38" 00:07:52.859 } 00:07:52.859 ] 00:07:52.859 }, 00:07:52.859 { 00:07:52.859 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:52.859 "subtype": "NVMe", 00:07:52.859 "listen_addresses": [ 
00:07:52.859 { 00:07:52.859 "trtype": "TCP", 00:07:52.859 "adrfam": "IPv4", 00:07:52.859 "traddr": "10.0.0.2", 00:07:52.859 "trsvcid": "4420" 00:07:52.859 } 00:07:52.859 ], 00:07:52.859 "allow_any_host": true, 00:07:52.859 "hosts": [], 00:07:52.859 "serial_number": "SPDK00000000000004", 00:07:52.859 "model_number": "SPDK bdev Controller", 00:07:52.859 "max_namespaces": 32, 00:07:52.859 "min_cntlid": 1, 00:07:52.859 "max_cntlid": 65519, 00:07:52.859 "namespaces": [ 00:07:52.859 { 00:07:52.859 "nsid": 1, 00:07:52.859 "bdev_name": "Null4", 00:07:52.859 "name": "Null4", 00:07:52.859 "nguid": "ED83A013ECA14CFD999B6F0DCA7B04A7", 00:07:52.859 "uuid": "ed83a013-eca1-4cfd-999b-6f0dca7b04a7" 00:07:52.859 } 00:07:52.859 ] 00:07:52.859 } 00:07:52.859 ] 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@42 -- # seq 1 4 00:07:52.859 10:53:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.859 10:53:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.859 10:53:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.859 10:53:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.859 10:53:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:52.859 10:53:49 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:52.859 10:53:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.859 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 10:53:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.859 10:53:49 -- target/discovery.sh@49 -- # check_bdevs= 00:07:52.859 10:53:49 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:52.859 10:53:49 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:52.859 10:53:49 -- target/discovery.sh@57 -- # nvmftestfini 00:07:52.859 10:53:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:52.859 10:53:49 -- nvmf/common.sh@117 -- # sync 00:07:52.859 10:53:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.859 10:53:49 -- nvmf/common.sh@120 -- # set +e 00:07:52.859 10:53:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.859 10:53:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.859 rmmod nvme_tcp 00:07:52.859 rmmod nvme_fabrics 00:07:52.859 rmmod nvme_keyring 00:07:52.859 10:53:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.859 10:53:49 -- nvmf/common.sh@124 -- # set -e 00:07:52.859 10:53:49 -- nvmf/common.sh@125 -- # return 0 00:07:52.859 10:53:49 -- nvmf/common.sh@478 -- # '[' -n 173747 ']' 00:07:52.859 10:53:49 -- nvmf/common.sh@479 -- # killprocess 173747 00:07:52.859 10:53:49 -- common/autotest_common.sh@946 -- # '[' -z 173747 ']' 00:07:52.859 10:53:49 -- common/autotest_common.sh@950 -- # kill -0 173747 00:07:52.859 10:53:49 -- common/autotest_common.sh@951 -- # uname 00:07:52.859 10:53:49 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:52.859 10:53:49 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 173747 00:07:53.121 10:53:49 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:53.121 10:53:49 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:53.121 10:53:49 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 173747' 00:07:53.121 killing process with pid 173747 00:07:53.121 10:53:49 -- common/autotest_common.sh@965 -- # kill 173747 00:07:53.121 [2024-05-15 10:53:49.552022] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:53.121 10:53:49 -- common/autotest_common.sh@970 -- # wait 173747 00:07:53.121 10:53:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:53.121 10:53:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:53.121 10:53:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:53.121 10:53:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.121 10:53:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.121 10:53:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.121 10:53:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.122 10:53:49 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.672 10:53:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:55.672 00:07:55.672 real 0m10.839s 00:07:55.672 user 0m7.649s 00:07:55.672 sys 0m5.569s 00:07:55.672 10:53:51 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.672 10:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:55.672 ************************************ 00:07:55.672 END TEST nvmf_target_discovery 00:07:55.672 ************************************ 00:07:55.672 10:53:51 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:55.672 10:53:51 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:55.672 10:53:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.672 10:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:55.672 ************************************ 00:07:55.672 START TEST nvmf_referrals 00:07:55.672 ************************************ 00:07:55.672 10:53:51 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:55.672 * Looking for test storage... 00:07:55.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.672 10:53:51 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.672 10:53:51 -- nvmf/common.sh@7 -- # uname -s 00:07:55.672 10:53:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.672 10:53:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.672 10:53:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.672 10:53:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.672 10:53:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.672 10:53:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.672 10:53:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.672 10:53:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.672 10:53:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.672 10:53:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.672 10:53:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:55.672 10:53:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:55.672 10:53:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.672 10:53:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.672 10:53:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.672 10:53:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.672 10:53:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.672 10:53:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.672 10:53:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.672 10:53:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.673 10:53:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.673 10:53:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.673 10:53:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.673 10:53:51 -- paths/export.sh@5 -- # export PATH 00:07:55.673 10:53:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.673 10:53:51 -- nvmf/common.sh@47 -- # : 0 00:07:55.673 10:53:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.673 10:53:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.673 10:53:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.673 10:53:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.673 10:53:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.673 10:53:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.673 10:53:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.673 10:53:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.673 10:53:51 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:55.673 10:53:51 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:55.673 10:53:51 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:55.673 10:53:51 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:55.673 10:53:51 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:55.673 10:53:51 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:55.673 10:53:51 -- target/referrals.sh@37 -- # nvmftestinit 00:07:55.673 10:53:51 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:55.673 10:53:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.673 10:53:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:55.673 10:53:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:55.673 10:53:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:55.673 10:53:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.673 10:53:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.673 10:53:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.673 10:53:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:55.673 10:53:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:55.673 10:53:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.673 10:53:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.258 10:53:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:02.258 10:53:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.258 10:53:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.258 10:53:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.258 10:53:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.258 10:53:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.258 10:53:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.258 10:53:58 -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.258 10:53:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.258 10:53:58 -- nvmf/common.sh@296 -- # e810=() 00:08:02.258 10:53:58 -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.258 10:53:58 -- nvmf/common.sh@297 -- # x722=() 00:08:02.258 10:53:58 -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.258 10:53:58 -- nvmf/common.sh@298 -- # mlx=() 00:08:02.258 10:53:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.258 10:53:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.258 10:53:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.258 10:53:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.258 10:53:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.258 10:53:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.258 10:53:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:02.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:02.258 10:53:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.258 10:53:58 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.258 10:53:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:02.258 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:02.258 10:53:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.258 10:53:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.258 10:53:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.258 10:53:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.258 10:53:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.258 10:53:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:02.258 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:02.258 10:53:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.258 10:53:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.258 10:53:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.258 10:53:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.258 10:53:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.258 10:53:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:02.258 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:02.258 10:53:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.258 10:53:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:02.258 10:53:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:02.258 10:53:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:02.258 10:53:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:02.258 10:53:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.258 10:53:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.258 10:53:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.258 10:53:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.258 10:53:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.258 10:53:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.258 10:53:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.258 10:53:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.258 10:53:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.258 10:53:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.258 10:53:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.258 10:53:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.258 10:53:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
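The nvmf_tcp_init sequence running here splits the two E810 ports across network namespaces: the target port is moved into cvl_0_0_ns_spdk while the initiator port stays in the root namespace; a condensed sketch of the whole setup, with addresses and interface names as in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1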
00:08:02.258 10:53:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.258 10:53:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.258 10:53:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.258 10:53:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.519 10:53:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.519 10:53:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.520 10:53:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:08:02.520 00:08:02.520 --- 10.0.0.2 ping statistics --- 00:08:02.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.520 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:08:02.520 10:53:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:08:02.520 00:08:02.520 --- 10.0.0.1 ping statistics --- 00:08:02.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.520 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:08:02.520 10:53:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.520 10:53:58 -- nvmf/common.sh@411 -- # return 0 00:08:02.520 10:53:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:02.520 10:53:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.520 10:53:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:02.520 10:53:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:02.520 10:53:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.520 10:53:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:02.520 10:53:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:02.520 10:53:59 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:02.520 10:53:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:02.520 10:53:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:02.520 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:02.520 10:53:59 -- nvmf/common.sh@470 -- # nvmfpid=178213 00:08:02.520 10:53:59 -- nvmf/common.sh@471 -- # waitforlisten 178213 00:08:02.520 10:53:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.520 10:53:59 -- common/autotest_common.sh@827 -- # '[' -z 178213 ']' 00:08:02.520 10:53:59 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.520 10:53:59 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:02.520 10:53:59 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.520 10:53:59 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.520 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:02.520 [2024-05-15 10:53:59.079924] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
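Once the target is up, the referral checks traced below reduce to a handful of RPCs plus a discover from the initiator side; a rough sketch of that flow (rpc.py path abbreviated, host NQN/ID as generated earlier in common.sh):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc.py nvmf_discovery_get_referrals | jq length        # expect 3 referrals
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json                 # referrals visible to the host
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430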
00:08:02.520 [2024-05-15 10:53:59.079986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.520 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.520 [2024-05-15 10:53:59.148379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.782 [2024-05-15 10:53:59.222395] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.782 [2024-05-15 10:53:59.222433] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.782 [2024-05-15 10:53:59.222441] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.782 [2024-05-15 10:53:59.222448] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.782 [2024-05-15 10:53:59.222454] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.782 [2024-05-15 10:53:59.222598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.782 [2024-05-15 10:53:59.222674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.782 [2024-05-15 10:53:59.222839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.782 [2024-05-15 10:53:59.222840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.355 10:53:59 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.355 10:53:59 -- common/autotest_common.sh@860 -- # return 0 00:08:03.355 10:53:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:03.355 10:53:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.355 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:03.355 10:53:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.355 10:53:59 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.355 10:53:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.355 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:03.355 [2024-05-15 10:53:59.896138] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.355 10:53:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.355 10:53:59 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:03.355 10:53:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.355 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:03.355 [2024-05-15 10:53:59.912133] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:03.355 [2024-05-15 10:53:59.912319] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:03.355 10:53:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.355 10:53:59 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:03.355 10:53:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.355 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:03.355 10:53:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.355 10:53:59 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t 
tcp -a 127.0.0.3 -s 4430 00:08:03.355 10:53:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.355 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:03.355 10:53:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.355 10:53:59 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:03.355 10:53:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.355 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:03.355 10:53:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.355 10:53:59 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.355 10:53:59 -- target/referrals.sh@48 -- # jq length 00:08:03.355 10:53:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.355 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:03.355 10:53:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.355 10:53:59 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:03.355 10:53:59 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:03.355 10:54:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:03.355 10:54:00 -- target/referrals.sh@21 -- # sort 00:08:03.355 10:54:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.355 10:54:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:03.355 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.355 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:03.618 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.618 10:54:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:03.618 10:54:00 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:03.618 10:54:00 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:03.618 10:54:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:03.618 10:54:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:03.618 10:54:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.618 10:54:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:03.618 10:54:00 -- target/referrals.sh@26 -- # sort 00:08:03.618 10:54:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:03.618 10:54:00 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:03.618 10:54:00 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:03.618 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.618 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:03.618 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.618 10:54:00 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:03.618 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.618 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:03.618 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.618 10:54:00 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:03.618 10:54:00 -- common/autotest_common.sh@559 
-- # xtrace_disable 00:08:03.618 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:03.618 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.618 10:54:00 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.618 10:54:00 -- target/referrals.sh@56 -- # jq length 00:08:03.618 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.618 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:03.618 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.618 10:54:00 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:03.618 10:54:00 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:03.618 10:54:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:03.618 10:54:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:03.618 10:54:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.618 10:54:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:03.618 10:54:00 -- target/referrals.sh@26 -- # sort 00:08:03.880 10:54:00 -- target/referrals.sh@26 -- # echo 00:08:03.880 10:54:00 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:03.880 10:54:00 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:03.880 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.880 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.880 10:54:00 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:03.880 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.880 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.880 10:54:00 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:03.880 10:54:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:03.880 10:54:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.880 10:54:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:03.880 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.880 10:54:00 -- target/referrals.sh@21 -- # sort 00:08:03.880 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.880 10:54:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:03.880 10:54:00 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:03.880 10:54:00 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:03.880 10:54:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:03.880 10:54:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:03.880 10:54:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.880 10:54:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:03.880 10:54:00 -- target/referrals.sh@26 -- # sort 00:08:04.141 10:54:00 -- target/referrals.sh@26 -- # echo 
127.0.0.2 127.0.0.2 00:08:04.141 10:54:00 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.141 10:54:00 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:04.141 10:54:00 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:04.141 10:54:00 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:04.141 10:54:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.141 10:54:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:04.404 10:54:00 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:04.404 10:54:00 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:04.404 10:54:00 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:04.404 10:54:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:04.404 10:54:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.405 10:54:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:04.405 10:54:00 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:04.405 10:54:00 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.405 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.405 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:04.405 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.405 10:54:00 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:04.405 10:54:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.405 10:54:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.405 10:54:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.405 10:54:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.405 10:54:00 -- target/referrals.sh@21 -- # sort 00:08:04.405 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:08:04.405 10:54:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.405 10:54:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:04.405 10:54:00 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:04.405 10:54:00 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:04.405 10:54:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.405 10:54:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.405 10:54:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.405 10:54:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.405 10:54:00 -- target/referrals.sh@26 -- # sort 00:08:04.405 10:54:01 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:04.405 10:54:01 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2 ]] 00:08:04.405 10:54:01 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:04.405 10:54:01 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:04.405 10:54:01 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:04.405 10:54:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.405 10:54:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:04.666 10:54:01 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:04.666 10:54:01 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:04.666 10:54:01 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:04.666 10:54:01 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:04.666 10:54:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.666 10:54:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:04.666 10:54:01 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:04.666 10:54:01 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:04.666 10:54:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.666 10:54:01 -- common/autotest_common.sh@10 -- # set +x 00:08:04.927 10:54:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.927 10:54:01 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.927 10:54:01 -- target/referrals.sh@82 -- # jq length 00:08:04.927 10:54:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.927 10:54:01 -- common/autotest_common.sh@10 -- # set +x 00:08:04.927 10:54:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.927 10:54:01 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:04.927 10:54:01 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:04.927 10:54:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.927 10:54:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.928 10:54:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.928 10:54:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.928 10:54:01 -- target/referrals.sh@26 -- # sort 00:08:04.928 10:54:01 -- target/referrals.sh@26 -- # echo 00:08:04.928 10:54:01 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:04.928 10:54:01 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:04.928 10:54:01 -- target/referrals.sh@86 -- # nvmftestfini 00:08:04.928 10:54:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:04.928 10:54:01 -- nvmf/common.sh@117 -- # sync 00:08:04.928 10:54:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.928 10:54:01 -- nvmf/common.sh@120 -- # set +e 00:08:04.928 10:54:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.928 10:54:01 -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:08:04.928 rmmod nvme_tcp 00:08:04.928 rmmod nvme_fabrics 00:08:04.928 rmmod nvme_keyring 00:08:05.189 10:54:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.189 10:54:01 -- nvmf/common.sh@124 -- # set -e 00:08:05.189 10:54:01 -- nvmf/common.sh@125 -- # return 0 00:08:05.189 10:54:01 -- nvmf/common.sh@478 -- # '[' -n 178213 ']' 00:08:05.189 10:54:01 -- nvmf/common.sh@479 -- # killprocess 178213 00:08:05.189 10:54:01 -- common/autotest_common.sh@946 -- # '[' -z 178213 ']' 00:08:05.189 10:54:01 -- common/autotest_common.sh@950 -- # kill -0 178213 00:08:05.189 10:54:01 -- common/autotest_common.sh@951 -- # uname 00:08:05.189 10:54:01 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:05.189 10:54:01 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 178213 00:08:05.189 10:54:01 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:05.189 10:54:01 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:05.189 10:54:01 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 178213' 00:08:05.189 killing process with pid 178213 00:08:05.189 10:54:01 -- common/autotest_common.sh@965 -- # kill 178213 00:08:05.189 [2024-05-15 10:54:01.645606] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:05.189 10:54:01 -- common/autotest_common.sh@970 -- # wait 178213 00:08:05.189 10:54:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:05.189 10:54:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:05.189 10:54:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:05.189 10:54:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.189 10:54:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.189 10:54:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.189 10:54:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.189 10:54:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.737 10:54:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:07.737 00:08:07.737 real 0m12.002s 00:08:07.737 user 0m13.161s 00:08:07.737 sys 0m5.851s 00:08:07.737 10:54:03 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.737 10:54:03 -- common/autotest_common.sh@10 -- # set +x 00:08:07.737 ************************************ 00:08:07.737 END TEST nvmf_referrals 00:08:07.737 ************************************ 00:08:07.737 10:54:03 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:07.737 10:54:03 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:07.737 10:54:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.737 10:54:03 -- common/autotest_common.sh@10 -- # set +x 00:08:07.737 ************************************ 00:08:07.737 START TEST nvmf_connect_disconnect 00:08:07.737 ************************************ 00:08:07.737 10:54:03 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:07.737 * Looking for test storage... 
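In outline, the nvmf_referrals run that just completed exercises the discovery-referral RPCs roughly as follows. This is a condensed sketch that drives SPDK's rpc.py client directly (the test's rpc_cmd wrapper issues the same RPCs, so the client path is an assumption); the discovery port 8009 and referral port 4430 are the values seen in the trace, and the real initiator-side check also passes --hostnqn/--hostid, omitted here for brevity.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location of the SPDK RPC client
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport (flags as in the trace)
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery  # discovery listener
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                   # register three referrals
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$rpc nvmf_discovery_get_referrals | jq length                 # target-side view: expect 3
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |            # initiator-side view of the same referrals
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                   # remove them again
    $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done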
00:08:07.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.738 10:54:04 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.738 10:54:04 -- nvmf/common.sh@7 -- # uname -s 00:08:07.738 10:54:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.738 10:54:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.738 10:54:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.738 10:54:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.738 10:54:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.738 10:54:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.738 10:54:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.738 10:54:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.738 10:54:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.738 10:54:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.738 10:54:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:07.738 10:54:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:07.738 10:54:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.738 10:54:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.738 10:54:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.738 10:54:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.738 10:54:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.738 10:54:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.738 10:54:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.738 10:54:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.738 10:54:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.738 10:54:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.738 10:54:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.738 10:54:04 -- paths/export.sh@5 -- # export PATH 00:08:07.738 10:54:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.738 10:54:04 -- nvmf/common.sh@47 -- # : 0 00:08:07.738 10:54:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.738 10:54:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.738 10:54:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.738 10:54:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.738 10:54:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.738 10:54:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.738 10:54:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.738 10:54:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.738 10:54:04 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.738 10:54:04 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.738 10:54:04 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:07.738 10:54:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:07.738 10:54:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.738 10:54:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:07.738 10:54:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:07.738 10:54:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:07.738 10:54:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.738 10:54:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.738 10:54:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.738 10:54:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:07.738 10:54:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:07.738 10:54:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.738 10:54:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.333 10:54:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:14.333 10:54:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.333 10:54:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.333 10:54:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.333 10:54:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.333 10:54:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.333 10:54:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.333 10:54:10 -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.333 10:54:10 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:14.333 10:54:10 -- nvmf/common.sh@296 -- # e810=() 00:08:14.333 10:54:10 -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.333 10:54:10 -- nvmf/common.sh@297 -- # x722=() 00:08:14.333 10:54:10 -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.333 10:54:10 -- nvmf/common.sh@298 -- # mlx=() 00:08:14.333 10:54:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.333 10:54:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.333 10:54:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.333 10:54:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.333 10:54:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.333 10:54:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.333 10:54:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:14.333 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:14.333 10:54:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.333 10:54:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:14.333 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:14.333 10:54:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.333 10:54:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.333 10:54:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.333 10:54:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:14.333 10:54:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.333 10:54:10 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:08:14.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:14.333 10:54:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.333 10:54:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.333 10:54:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.333 10:54:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:14.333 10:54:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.333 10:54:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:14.333 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:14.333 10:54:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.333 10:54:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:14.333 10:54:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:14.333 10:54:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:14.333 10:54:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:14.333 10:54:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.333 10:54:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.333 10:54:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.333 10:54:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.333 10:54:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.333 10:54:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.333 10:54:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.333 10:54:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.333 10:54:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.333 10:54:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.333 10:54:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.333 10:54:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.333 10:54:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.333 10:54:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.333 10:54:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.333 10:54:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.333 10:54:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.595 10:54:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.595 10:54:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.595 10:54:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:08:14.595 00:08:14.595 --- 10.0.0.2 ping statistics --- 00:08:14.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.595 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:08:14.595 10:54:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:08:14.595 00:08:14.595 --- 10.0.0.1 ping statistics --- 00:08:14.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.595 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:14.595 10:54:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.595 10:54:11 -- nvmf/common.sh@411 -- # return 0 00:08:14.595 10:54:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:14.595 10:54:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.595 10:54:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:14.595 10:54:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:14.595 10:54:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.595 10:54:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:14.595 10:54:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:14.595 10:54:11 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:14.595 10:54:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:14.595 10:54:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:14.595 10:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:14.595 10:54:11 -- nvmf/common.sh@470 -- # nvmfpid=183545 00:08:14.595 10:54:11 -- nvmf/common.sh@471 -- # waitforlisten 183545 00:08:14.595 10:54:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.595 10:54:11 -- common/autotest_common.sh@827 -- # '[' -z 183545 ']' 00:08:14.595 10:54:11 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.595 10:54:11 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:14.595 10:54:11 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.595 10:54:11 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:14.595 10:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:14.595 [2024-05-15 10:54:11.192187] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:08:14.595 [2024-05-15 10:54:11.192265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.595 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.856 [2024-05-15 10:54:11.260570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.856 [2024-05-15 10:54:11.334508] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.856 [2024-05-15 10:54:11.334544] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.856 [2024-05-15 10:54:11.334557] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.856 [2024-05-15 10:54:11.334564] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.856 [2024-05-15 10:54:11.334570] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
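Once the reactors below come up, connect_disconnect.sh provisions a single subsystem and then repeatedly attaches and detaches an initiator. A condensed sketch of that flow follows: the RPC calls match the trace, but the connect/disconnect pair itself runs with xtrace disabled in this log, so the nvme-cli invocations shown are an assumption inferred from the "disconnected 1 controller(s)" messages.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location of the SPDK RPC client
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512                                 # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
for i in $(seq 1 5); do                                        # num_iterations=5 in this run
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # assumed initiator side
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1              # prints "... disconnected 1 controller(s)"
done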
00:08:14.856 [2024-05-15 10:54:11.334744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.856 [2024-05-15 10:54:11.334860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.856 [2024-05-15 10:54:11.335016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.856 [2024-05-15 10:54:11.335016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.429 10:54:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:15.429 10:54:11 -- common/autotest_common.sh@860 -- # return 0 00:08:15.429 10:54:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:15.429 10:54:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.429 10:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:15.429 10:54:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:15.429 10:54:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.429 10:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.429 [2024-05-15 10:54:12.014070] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.429 10:54:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:15.429 10:54:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.429 10:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.429 10:54:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.429 10:54:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.429 10:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.429 10:54:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.429 10:54:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.429 10:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.429 10:54:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.429 10:54:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.429 10:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.429 [2024-05-15 10:54:12.073240] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:15.429 [2024-05-15 10:54:12.073460] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.429 10:54:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:15.429 10:54:12 -- target/connect_disconnect.sh@34 -- # set +x 00:08:19.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.160 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.765 10:54:30 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:33.765 10:54:30 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:33.765 10:54:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:33.765 10:54:30 -- nvmf/common.sh@117 -- # sync 00:08:33.765 10:54:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.765 10:54:30 -- nvmf/common.sh@120 -- # set +e 00:08:33.765 10:54:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.765 10:54:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.765 rmmod nvme_tcp 00:08:33.765 rmmod nvme_fabrics 00:08:33.765 rmmod nvme_keyring 00:08:34.026 10:54:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.026 10:54:30 -- nvmf/common.sh@124 -- # set -e 00:08:34.026 10:54:30 -- nvmf/common.sh@125 -- # return 0 00:08:34.026 10:54:30 -- nvmf/common.sh@478 -- # '[' -n 183545 ']' 00:08:34.026 10:54:30 -- nvmf/common.sh@479 -- # killprocess 183545 00:08:34.026 10:54:30 -- common/autotest_common.sh@946 -- # '[' -z 183545 ']' 00:08:34.026 10:54:30 -- common/autotest_common.sh@950 -- # kill -0 183545 00:08:34.026 10:54:30 -- common/autotest_common.sh@951 -- # uname 00:08:34.026 10:54:30 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:34.026 10:54:30 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 183545 00:08:34.026 10:54:30 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:34.026 10:54:30 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:34.026 10:54:30 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 183545' 00:08:34.026 killing process with pid 183545 00:08:34.026 10:54:30 -- common/autotest_common.sh@965 -- # kill 183545 00:08:34.026 [2024-05-15 10:54:30.485872] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:34.026 10:54:30 -- common/autotest_common.sh@970 -- # wait 183545 00:08:34.026 10:54:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:34.026 10:54:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:34.026 10:54:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:34.026 10:54:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.026 10:54:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.026 10:54:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.026 10:54:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.026 10:54:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.570 10:54:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.570 00:08:36.570 real 0m28.771s 00:08:36.570 user 1m18.903s 00:08:36.570 sys 0m6.462s 00:08:36.570 10:54:32 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.570 10:54:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.570 ************************************ 00:08:36.570 END TEST nvmf_connect_disconnect 00:08:36.570 ************************************ 00:08:36.570 10:54:32 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:36.570 10:54:32 -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:08:36.570 10:54:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.570 10:54:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.570 ************************************ 00:08:36.570 START TEST nvmf_multitarget 00:08:36.570 ************************************ 00:08:36.570 10:54:32 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:36.570 * Looking for test storage... 00:08:36.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.570 10:54:32 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.570 10:54:32 -- nvmf/common.sh@7 -- # uname -s 00:08:36.570 10:54:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.570 10:54:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.570 10:54:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.570 10:54:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.570 10:54:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.570 10:54:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.570 10:54:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.570 10:54:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.570 10:54:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.570 10:54:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.570 10:54:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.570 10:54:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.570 10:54:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.570 10:54:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.570 10:54:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.570 10:54:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.570 10:54:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.570 10:54:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.570 10:54:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.570 10:54:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.570 10:54:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.570 10:54:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.570 10:54:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.570 10:54:32 -- paths/export.sh@5 -- # export PATH 00:08:36.570 10:54:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.570 10:54:32 -- nvmf/common.sh@47 -- # : 0 00:08:36.570 10:54:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.570 10:54:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.570 10:54:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.570 10:54:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.570 10:54:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.570 10:54:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.570 10:54:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.570 10:54:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.570 10:54:32 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:36.570 10:54:32 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:36.570 10:54:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:36.570 10:54:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.570 10:54:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:36.570 10:54:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:36.570 10:54:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:36.570 10:54:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.570 10:54:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.570 10:54:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.570 10:54:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:36.570 10:54:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:36.570 10:54:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.570 10:54:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.159 10:54:39 -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:08:43.159 10:54:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.159 10:54:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:43.159 10:54:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.159 10:54:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.159 10:54:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.159 10:54:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.159 10:54:39 -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.159 10:54:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.159 10:54:39 -- nvmf/common.sh@296 -- # e810=() 00:08:43.159 10:54:39 -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.159 10:54:39 -- nvmf/common.sh@297 -- # x722=() 00:08:43.159 10:54:39 -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.159 10:54:39 -- nvmf/common.sh@298 -- # mlx=() 00:08:43.159 10:54:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.159 10:54:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.159 10:54:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.159 10:54:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:43.159 10:54:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.159 10:54:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.159 10:54:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:43.159 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:43.159 10:54:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.159 10:54:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:43.159 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:43.159 10:54:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@366 -- # (( 
0 > 0 )) 00:08:43.159 10:54:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.159 10:54:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.159 10:54:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:43.159 10:54:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.159 10:54:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:43.159 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:43.159 10:54:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.159 10:54:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.159 10:54:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.159 10:54:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:43.159 10:54:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.159 10:54:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:43.159 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:43.159 10:54:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.159 10:54:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:43.159 10:54:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:43.159 10:54:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:43.159 10:54:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:43.160 10:54:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.160 10:54:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.160 10:54:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.160 10:54:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:43.160 10:54:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.160 10:54:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.160 10:54:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:43.160 10:54:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.160 10:54:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.160 10:54:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:43.160 10:54:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:43.160 10:54:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.160 10:54:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.422 10:54:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.422 10:54:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.422 10:54:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:43.422 10:54:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.422 10:54:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.422 10:54:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.422 10:54:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:43.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:08:43.423 00:08:43.423 --- 10.0.0.2 ping statistics --- 00:08:43.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.423 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:08:43.423 10:54:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:08:43.423 00:08:43.423 --- 10.0.0.1 ping statistics --- 00:08:43.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.423 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:43.423 10:54:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.423 10:54:39 -- nvmf/common.sh@411 -- # return 0 00:08:43.423 10:54:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:43.423 10:54:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.423 10:54:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:43.423 10:54:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:43.423 10:54:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.423 10:54:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:43.423 10:54:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:43.423 10:54:40 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:43.423 10:54:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:43.423 10:54:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:43.423 10:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:43.423 10:54:40 -- nvmf/common.sh@470 -- # nvmfpid=191665 00:08:43.423 10:54:40 -- nvmf/common.sh@471 -- # waitforlisten 191665 00:08:43.423 10:54:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.423 10:54:40 -- common/autotest_common.sh@827 -- # '[' -z 191665 ']' 00:08:43.423 10:54:40 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.423 10:54:40 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:43.423 10:54:40 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.423 10:54:40 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:43.423 10:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:43.684 [2024-05-15 10:54:40.082661] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:08:43.684 [2024-05-15 10:54:40.082732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.684 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.684 [2024-05-15 10:54:40.153982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.684 [2024-05-15 10:54:40.228685] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.684 [2024-05-15 10:54:40.228724] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
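The nvmf_tcp_init trace above is the whole physical-NIC topology for these runs: one port of the e810 pair (cvl_0_0) is moved into its own network namespace and becomes the target at 10.0.0.2, while the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, with an iptables rule admitting TCP 4420 and a ping in each direction as a sanity check. A condensed sketch of those steps (the real helper is nvmf_tcp_init in test/nvmf/common.sh; the interface names are taken from this log and will differ on other machines):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator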
00:08:43.684 [2024-05-15 10:54:40.228732] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.684 [2024-05-15 10:54:40.228740] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.684 [2024-05-15 10:54:40.228747] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.684 [2024-05-15 10:54:40.228892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.684 [2024-05-15 10:54:40.229003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.684 [2024-05-15 10:54:40.229158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.684 [2024-05-15 10:54:40.229159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.257 10:54:40 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:44.257 10:54:40 -- common/autotest_common.sh@860 -- # return 0 00:08:44.257 10:54:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:44.257 10:54:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.257 10:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:44.257 10:54:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.257 10:54:40 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:44.519 10:54:40 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:44.519 10:54:40 -- target/multitarget.sh@21 -- # jq length 00:08:44.519 10:54:41 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:44.519 10:54:41 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:44.519 "nvmf_tgt_1" 00:08:44.519 10:54:41 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:44.786 "nvmf_tgt_2" 00:08:44.786 10:54:41 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:44.786 10:54:41 -- target/multitarget.sh@28 -- # jq length 00:08:44.786 10:54:41 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:44.786 10:54:41 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:44.787 true 00:08:44.787 10:54:41 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:45.049 true 00:08:45.049 10:54:41 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.049 10:54:41 -- target/multitarget.sh@35 -- # jq length 00:08:45.049 10:54:41 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:45.049 10:54:41 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:45.049 10:54:41 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:45.049 10:54:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:45.049 10:54:41 -- nvmf/common.sh@117 -- # sync 00:08:45.049 10:54:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.049 10:54:41 -- nvmf/common.sh@120 -- # set +e 00:08:45.049 10:54:41 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.049 10:54:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.049 rmmod nvme_tcp 00:08:45.049 rmmod nvme_fabrics 00:08:45.049 rmmod nvme_keyring 00:08:45.049 10:54:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.049 10:54:41 -- nvmf/common.sh@124 -- # set -e 00:08:45.049 10:54:41 -- nvmf/common.sh@125 -- # return 0 00:08:45.049 10:54:41 -- nvmf/common.sh@478 -- # '[' -n 191665 ']' 00:08:45.049 10:54:41 -- nvmf/common.sh@479 -- # killprocess 191665 00:08:45.049 10:54:41 -- common/autotest_common.sh@946 -- # '[' -z 191665 ']' 00:08:45.049 10:54:41 -- common/autotest_common.sh@950 -- # kill -0 191665 00:08:45.049 10:54:41 -- common/autotest_common.sh@951 -- # uname 00:08:45.049 10:54:41 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:45.049 10:54:41 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 191665 00:08:45.309 10:54:41 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:45.309 10:54:41 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:45.309 10:54:41 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 191665' 00:08:45.309 killing process with pid 191665 00:08:45.309 10:54:41 -- common/autotest_common.sh@965 -- # kill 191665 00:08:45.309 10:54:41 -- common/autotest_common.sh@970 -- # wait 191665 00:08:45.309 10:54:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:45.309 10:54:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:45.309 10:54:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:45.309 10:54:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.309 10:54:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.309 10:54:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.309 10:54:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.309 10:54:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.858 10:54:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.858 00:08:47.858 real 0m11.153s 00:08:47.858 user 0m9.273s 00:08:47.858 sys 0m5.733s 00:08:47.858 10:54:43 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:47.858 10:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:47.858 ************************************ 00:08:47.858 END TEST nvmf_multitarget 00:08:47.858 ************************************ 00:08:47.858 10:54:43 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:47.858 10:54:43 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:47.858 10:54:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:47.858 10:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:47.858 ************************************ 00:08:47.858 START TEST nvmf_rpc 00:08:47.858 ************************************ 00:08:47.858 10:54:44 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:47.858 * Looking for test storage... 
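The multitarget pass that just finished drives test/nvmf/target/multitarget_rpc.py against the running nvmf_tgt: it confirms that only the default target exists, creates two extra targets, verifies the count with jq, then deletes them again. A minimal sketch of that sequence, assuming $rpc points at multitarget_rpc.py and the target application is still listening on /var/tmp/spdk.sock:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default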
00:08:47.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.858 10:54:44 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.858 10:54:44 -- nvmf/common.sh@7 -- # uname -s 00:08:47.858 10:54:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.858 10:54:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.858 10:54:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.858 10:54:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.858 10:54:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.858 10:54:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.858 10:54:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.858 10:54:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.858 10:54:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.858 10:54:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.858 10:54:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:47.858 10:54:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:47.858 10:54:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.858 10:54:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.858 10:54:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.858 10:54:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.858 10:54:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.858 10:54:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.858 10:54:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.858 10:54:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.858 10:54:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.858 10:54:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.858 10:54:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.858 10:54:44 -- paths/export.sh@5 -- # export PATH 00:08:47.858 10:54:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.858 10:54:44 -- nvmf/common.sh@47 -- # : 0 00:08:47.858 10:54:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.858 10:54:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.858 10:54:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.858 10:54:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.858 10:54:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.858 10:54:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.858 10:54:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.858 10:54:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.858 10:54:44 -- target/rpc.sh@11 -- # loops=5 00:08:47.858 10:54:44 -- target/rpc.sh@23 -- # nvmftestinit 00:08:47.858 10:54:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:47.858 10:54:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.858 10:54:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:47.858 10:54:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:47.858 10:54:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:47.858 10:54:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.858 10:54:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.858 10:54:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.858 10:54:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:47.858 10:54:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:47.858 10:54:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.858 10:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:54.453 10:54:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:54.453 10:54:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.453 10:54:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.453 10:54:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.453 10:54:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.453 10:54:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.453 10:54:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.453 10:54:50 -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.453 10:54:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.453 10:54:50 -- nvmf/common.sh@296 -- # e810=() 00:08:54.453 10:54:50 -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.453 
10:54:50 -- nvmf/common.sh@297 -- # x722=() 00:08:54.453 10:54:50 -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.453 10:54:50 -- nvmf/common.sh@298 -- # mlx=() 00:08:54.453 10:54:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.453 10:54:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.454 10:54:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.454 10:54:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.454 10:54:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.454 10:54:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.454 10:54:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:54.454 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:54.454 10:54:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.454 10:54:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:54.454 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:54.454 10:54:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.454 10:54:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.454 10:54:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.454 10:54:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.454 10:54:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.454 10:54:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:54.454 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:54.454 10:54:50 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:54.454 10:54:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.454 10:54:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.454 10:54:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.454 10:54:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.454 10:54:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:54.454 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:54.454 10:54:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.454 10:54:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:54.454 10:54:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:54.454 10:54:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:54.454 10:54:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.454 10:54:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.454 10:54:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.454 10:54:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.454 10:54:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.454 10:54:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.454 10:54:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.454 10:54:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.454 10:54:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.454 10:54:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.454 10:54:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.454 10:54:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.454 10:54:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.454 10:54:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.454 10:54:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.454 10:54:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.454 10:54:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.454 10:54:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.454 10:54:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.454 10:54:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:08:54.454 00:08:54.454 --- 10.0.0.2 ping statistics --- 00:08:54.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.454 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:08:54.454 10:54:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:08:54.454 00:08:54.454 --- 10.0.0.1 ping statistics --- 00:08:54.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.454 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:54.454 10:54:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.454 10:54:50 -- nvmf/common.sh@411 -- # return 0 00:08:54.454 10:54:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:54.454 10:54:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.454 10:54:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:54.454 10:54:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.454 10:54:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:54.454 10:54:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:54.454 10:54:50 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:54.454 10:54:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:54.454 10:54:50 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:54.454 10:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:54.454 10:54:50 -- nvmf/common.sh@470 -- # nvmfpid=196031 00:08:54.454 10:54:50 -- nvmf/common.sh@471 -- # waitforlisten 196031 00:08:54.454 10:54:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.454 10:54:50 -- common/autotest_common.sh@827 -- # '[' -z 196031 ']' 00:08:54.454 10:54:50 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.454 10:54:50 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:54.454 10:54:50 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.454 10:54:50 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:54.454 10:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:54.454 [2024-05-15 10:54:50.943285] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:08:54.454 [2024-05-15 10:54:50.943351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.454 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.454 [2024-05-15 10:54:51.013201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.454 [2024-05-15 10:54:51.089052] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.454 [2024-05-15 10:54:51.089089] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.454 [2024-05-15 10:54:51.089097] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.454 [2024-05-15 10:54:51.089104] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.454 [2024-05-15 10:54:51.089110] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
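nvmfappstart in rpc.sh repeats the pattern from the multitarget run: launch nvmf_tgt inside the target namespace, remember its pid, and block until the application answers on the default RPC socket. A rough equivalent of that wait loop (waitforlisten itself lives in test/common/autotest_common.sh; $SPDK_DIR standing in for the jenkins workspace path is an assumption here):

ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll /var/tmp/spdk.sock until the target responds, bailing out if it already died
until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done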
00:08:54.454 [2024-05-15 10:54:51.089260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.454 [2024-05-15 10:54:51.089365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.454 [2024-05-15 10:54:51.089521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.454 [2024-05-15 10:54:51.089522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.398 10:54:51 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:55.398 10:54:51 -- common/autotest_common.sh@860 -- # return 0 00:08:55.398 10:54:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:55.398 10:54:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.398 10:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:55.398 10:54:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.398 10:54:51 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:55.398 10:54:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.398 10:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:55.398 10:54:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.398 10:54:51 -- target/rpc.sh@26 -- # stats='{ 00:08:55.398 "tick_rate": 2400000000, 00:08:55.398 "poll_groups": [ 00:08:55.398 { 00:08:55.398 "name": "nvmf_tgt_poll_group_000", 00:08:55.398 "admin_qpairs": 0, 00:08:55.398 "io_qpairs": 0, 00:08:55.398 "current_admin_qpairs": 0, 00:08:55.398 "current_io_qpairs": 0, 00:08:55.398 "pending_bdev_io": 0, 00:08:55.398 "completed_nvme_io": 0, 00:08:55.398 "transports": [] 00:08:55.398 }, 00:08:55.398 { 00:08:55.398 "name": "nvmf_tgt_poll_group_001", 00:08:55.398 "admin_qpairs": 0, 00:08:55.398 "io_qpairs": 0, 00:08:55.398 "current_admin_qpairs": 0, 00:08:55.398 "current_io_qpairs": 0, 00:08:55.398 "pending_bdev_io": 0, 00:08:55.398 "completed_nvme_io": 0, 00:08:55.398 "transports": [] 00:08:55.398 }, 00:08:55.398 { 00:08:55.398 "name": "nvmf_tgt_poll_group_002", 00:08:55.398 "admin_qpairs": 0, 00:08:55.398 "io_qpairs": 0, 00:08:55.398 "current_admin_qpairs": 0, 00:08:55.398 "current_io_qpairs": 0, 00:08:55.398 "pending_bdev_io": 0, 00:08:55.398 "completed_nvme_io": 0, 00:08:55.398 "transports": [] 00:08:55.398 }, 00:08:55.398 { 00:08:55.398 "name": "nvmf_tgt_poll_group_003", 00:08:55.398 "admin_qpairs": 0, 00:08:55.398 "io_qpairs": 0, 00:08:55.398 "current_admin_qpairs": 0, 00:08:55.398 "current_io_qpairs": 0, 00:08:55.398 "pending_bdev_io": 0, 00:08:55.398 "completed_nvme_io": 0, 00:08:55.398 "transports": [] 00:08:55.398 } 00:08:55.398 ] 00:08:55.398 }' 00:08:55.398 10:54:51 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:55.398 10:54:51 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:55.398 10:54:51 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:55.398 10:54:51 -- target/rpc.sh@15 -- # wc -l 00:08:55.398 10:54:51 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:55.398 10:54:51 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:55.398 10:54:51 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:55.398 10:54:51 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.398 10:54:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.398 10:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:55.398 [2024-05-15 10:54:51.873454] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.398 10:54:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.398 10:54:51 
-- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:55.398 10:54:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.398 10:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:55.398 10:54:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.398 10:54:51 -- target/rpc.sh@33 -- # stats='{ 00:08:55.398 "tick_rate": 2400000000, 00:08:55.398 "poll_groups": [ 00:08:55.398 { 00:08:55.398 "name": "nvmf_tgt_poll_group_000", 00:08:55.398 "admin_qpairs": 0, 00:08:55.398 "io_qpairs": 0, 00:08:55.398 "current_admin_qpairs": 0, 00:08:55.398 "current_io_qpairs": 0, 00:08:55.398 "pending_bdev_io": 0, 00:08:55.398 "completed_nvme_io": 0, 00:08:55.398 "transports": [ 00:08:55.398 { 00:08:55.398 "trtype": "TCP" 00:08:55.398 } 00:08:55.398 ] 00:08:55.398 }, 00:08:55.398 { 00:08:55.398 "name": "nvmf_tgt_poll_group_001", 00:08:55.398 "admin_qpairs": 0, 00:08:55.398 "io_qpairs": 0, 00:08:55.398 "current_admin_qpairs": 0, 00:08:55.398 "current_io_qpairs": 0, 00:08:55.398 "pending_bdev_io": 0, 00:08:55.398 "completed_nvme_io": 0, 00:08:55.398 "transports": [ 00:08:55.398 { 00:08:55.398 "trtype": "TCP" 00:08:55.398 } 00:08:55.398 ] 00:08:55.398 }, 00:08:55.398 { 00:08:55.398 "name": "nvmf_tgt_poll_group_002", 00:08:55.398 "admin_qpairs": 0, 00:08:55.398 "io_qpairs": 0, 00:08:55.398 "current_admin_qpairs": 0, 00:08:55.398 "current_io_qpairs": 0, 00:08:55.398 "pending_bdev_io": 0, 00:08:55.398 "completed_nvme_io": 0, 00:08:55.398 "transports": [ 00:08:55.398 { 00:08:55.398 "trtype": "TCP" 00:08:55.398 } 00:08:55.398 ] 00:08:55.398 }, 00:08:55.398 { 00:08:55.398 "name": "nvmf_tgt_poll_group_003", 00:08:55.398 "admin_qpairs": 0, 00:08:55.398 "io_qpairs": 0, 00:08:55.398 "current_admin_qpairs": 0, 00:08:55.398 "current_io_qpairs": 0, 00:08:55.398 "pending_bdev_io": 0, 00:08:55.399 "completed_nvme_io": 0, 00:08:55.399 "transports": [ 00:08:55.399 { 00:08:55.399 "trtype": "TCP" 00:08:55.399 } 00:08:55.399 ] 00:08:55.399 } 00:08:55.399 ] 00:08:55.399 }' 00:08:55.399 10:54:51 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:55.399 10:54:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:55.399 10:54:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:55.399 10:54:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:55.399 10:54:51 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:55.399 10:54:51 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:55.399 10:54:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:55.399 10:54:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:55.399 10:54:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:55.399 10:54:51 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:55.399 10:54:52 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:55.399 10:54:52 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:55.399 10:54:52 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:55.399 10:54:52 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:55.399 10:54:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.399 10:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:55.399 Malloc1 00:08:55.399 10:54:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.399 10:54:52 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:55.399 10:54:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.399 10:54:52 -- common/autotest_common.sh@10 -- # set +x 
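The jcount and jsum helpers applied to the nvmf_get_stats output above are small jq wrappers: one counts matching entries, the other sums a numeric field across all poll groups. The same checks written out by hand, assuming the same $SPDK_DIR placeholder and that scripts/rpc.py can reach the target on /var/tmp/spdk.sock:

stats=$("$SPDK_DIR/scripts/rpc.py" nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].name' | wc -l                                 # 4 poll groups for -m 0xF
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 0 before any host connects
echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 0 before any host connects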
00:08:55.399 10:54:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.399 10:54:52 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:55.399 10:54:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.399 10:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:55.399 10:54:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.399 10:54:52 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:55.399 10:54:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.399 10:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:55.660 10:54:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.660 10:54:52 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.660 10:54:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.660 10:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:55.660 [2024-05-15 10:54:52.065029] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:55.660 [2024-05-15 10:54:52.065242] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.660 10:54:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.660 10:54:52 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:55.660 10:54:52 -- common/autotest_common.sh@648 -- # local es=0 00:08:55.660 10:54:52 -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:55.660 10:54:52 -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:55.660 10:54:52 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.660 10:54:52 -- common/autotest_common.sh@640 -- # type -t nvme 00:08:55.660 10:54:52 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.660 10:54:52 -- common/autotest_common.sh@642 -- # type -P nvme 00:08:55.660 10:54:52 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.660 10:54:52 -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:55.660 10:54:52 -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:55.660 10:54:52 -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:55.660 [2024-05-15 10:54:52.092068] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:08:55.660 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:55.660 could not add new controller: failed to write to nvme-fabrics device 00:08:55.660 10:54:52 -- 
common/autotest_common.sh@651 -- # es=1 00:08:55.660 10:54:52 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:55.660 10:54:52 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:55.660 10:54:52 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:55.660 10:54:52 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:55.660 10:54:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.660 10:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:55.660 10:54:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.660 10:54:52 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.047 10:54:53 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:57.047 10:54:53 -- common/autotest_common.sh@1194 -- # local i=0 00:08:57.047 10:54:53 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.047 10:54:53 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:57.047 10:54:53 -- common/autotest_common.sh@1201 -- # sleep 2 00:08:58.963 10:54:55 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:58.963 10:54:55 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:58.963 10:54:55 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.223 10:54:55 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:59.223 10:54:55 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.223 10:54:55 -- common/autotest_common.sh@1204 -- # return 0 00:08:59.223 10:54:55 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.223 10:54:55 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.223 10:54:55 -- common/autotest_common.sh@1215 -- # local i=0 00:08:59.223 10:54:55 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:59.223 10:54:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.223 10:54:55 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:59.223 10:54:55 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.224 10:54:55 -- common/autotest_common.sh@1227 -- # return 0 00:08:59.224 10:54:55 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.224 10:54:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.224 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:08:59.224 10:54:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.224 10:54:55 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.224 10:54:55 -- common/autotest_common.sh@648 -- # local es=0 00:08:59.224 10:54:55 -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.224 10:54:55 -- 
common/autotest_common.sh@636 -- # local arg=nvme 00:08:59.224 10:54:55 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.224 10:54:55 -- common/autotest_common.sh@640 -- # type -t nvme 00:08:59.224 10:54:55 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.224 10:54:55 -- common/autotest_common.sh@642 -- # type -P nvme 00:08:59.224 10:54:55 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.224 10:54:55 -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:59.224 10:54:55 -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:59.224 10:54:55 -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.224 [2024-05-15 10:54:55.799789] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:08:59.224 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:59.224 could not add new controller: failed to write to nvme-fabrics device 00:08:59.224 10:54:55 -- common/autotest_common.sh@651 -- # es=1 00:08:59.224 10:54:55 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:59.224 10:54:55 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:59.224 10:54:55 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:59.224 10:54:55 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:59.224 10:54:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.224 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:08:59.224 10:54:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.224 10:54:55 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.139 10:54:57 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.139 10:54:57 -- common/autotest_common.sh@1194 -- # local i=0 00:09:01.139 10:54:57 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.139 10:54:57 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:01.139 10:54:57 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:03.053 10:54:59 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:03.053 10:54:59 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:03.053 10:54:59 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.053 10:54:59 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:03.053 10:54:59 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.053 10:54:59 -- common/autotest_common.sh@1204 -- # return 0 00:09:03.053 10:54:59 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.053 10:54:59 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.053 10:54:59 -- common/autotest_common.sh@1215 -- # local i=0 00:09:03.053 10:54:59 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:03.053 10:54:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.053 10:54:59 -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:03.053 10:54:59 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.053 10:54:59 -- common/autotest_common.sh@1227 -- # return 0 00:09:03.053 10:54:59 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.053 10:54:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.053 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:03.053 10:54:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.053 10:54:59 -- target/rpc.sh@81 -- # seq 1 5 00:09:03.053 10:54:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:03.053 10:54:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:03.053 10:54:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.053 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:03.053 10:54:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.053 10:54:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.053 10:54:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.053 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:03.053 [2024-05-15 10:54:59.545104] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.053 10:54:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.053 10:54:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:03.053 10:54:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.053 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:03.053 10:54:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.053 10:54:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:03.053 10:54:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.053 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:03.053 10:54:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.053 10:54:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.441 10:55:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.441 10:55:01 -- common/autotest_common.sh@1194 -- # local i=0 00:09:04.441 10:55:01 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.441 10:55:01 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:04.441 10:55:01 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:06.990 10:55:03 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:06.990 10:55:03 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:06.990 10:55:03 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.990 10:55:03 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:06.990 10:55:03 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.990 10:55:03 -- common/autotest_common.sh@1204 -- # return 0 00:09:06.990 10:55:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.990 10:55:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:09:06.990 10:55:03 -- common/autotest_common.sh@1215 -- # local i=0 00:09:06.990 10:55:03 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:06.990 10:55:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.990 10:55:03 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:06.990 10:55:03 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.990 10:55:03 -- common/autotest_common.sh@1227 -- # return 0 00:09:06.990 10:55:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:06.990 10:55:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.990 10:55:03 -- common/autotest_common.sh@10 -- # set +x 00:09:06.990 10:55:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.990 10:55:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:06.990 10:55:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.990 10:55:03 -- common/autotest_common.sh@10 -- # set +x 00:09:06.990 10:55:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.990 10:55:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:06.990 10:55:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:06.990 10:55:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.990 10:55:03 -- common/autotest_common.sh@10 -- # set +x 00:09:06.990 10:55:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.990 10:55:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.990 10:55:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.991 10:55:03 -- common/autotest_common.sh@10 -- # set +x 00:09:06.991 [2024-05-15 10:55:03.239570] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.991 10:55:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.991 10:55:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:06.991 10:55:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.991 10:55:03 -- common/autotest_common.sh@10 -- # set +x 00:09:06.991 10:55:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.991 10:55:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:06.991 10:55:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.991 10:55:03 -- common/autotest_common.sh@10 -- # set +x 00:09:06.991 10:55:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.991 10:55:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.376 10:55:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:08.376 10:55:04 -- common/autotest_common.sh@1194 -- # local i=0 00:09:08.376 10:55:04 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.376 10:55:04 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:08.376 10:55:04 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:10.289 10:55:06 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:10.289 10:55:06 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:10.289 10:55:06 -- common/autotest_common.sh@1203 -- 
# grep -c SPDKISFASTANDAWESOME 00:09:10.289 10:55:06 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:10.289 10:55:06 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.289 10:55:06 -- common/autotest_common.sh@1204 -- # return 0 00:09:10.289 10:55:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.289 10:55:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.289 10:55:06 -- common/autotest_common.sh@1215 -- # local i=0 00:09:10.289 10:55:06 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:10.289 10:55:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.289 10:55:06 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:10.289 10:55:06 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.289 10:55:06 -- common/autotest_common.sh@1227 -- # return 0 00:09:10.289 10:55:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:10.289 10:55:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.289 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:09:10.289 10:55:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.289 10:55:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:10.289 10:55:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.289 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:09:10.289 10:55:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.289 10:55:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:10.289 10:55:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:10.289 10:55:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.289 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:09:10.289 10:55:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.289 10:55:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.289 10:55:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.289 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:09:10.289 [2024-05-15 10:55:06.938473] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.551 10:55:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.551 10:55:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:10.551 10:55:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.551 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:09:10.551 10:55:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.551 10:55:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:10.551 10:55:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.551 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:09:10.551 10:55:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.551 10:55:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.937 10:55:08 -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:09:11.937 10:55:08 -- common/autotest_common.sh@1194 -- # local i=0 00:09:11.937 10:55:08 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.937 10:55:08 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:11.937 10:55:08 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:13.853 10:55:10 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:13.853 10:55:10 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:13.853 10:55:10 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.853 10:55:10 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:13.853 10:55:10 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.853 10:55:10 -- common/autotest_common.sh@1204 -- # return 0 00:09:13.853 10:55:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.114 10:55:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.114 10:55:10 -- common/autotest_common.sh@1215 -- # local i=0 00:09:14.114 10:55:10 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:14.114 10:55:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.114 10:55:10 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:14.114 10:55:10 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.114 10:55:10 -- common/autotest_common.sh@1227 -- # return 0 00:09:14.114 10:55:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.114 10:55:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.114 10:55:10 -- common/autotest_common.sh@10 -- # set +x 00:09:14.114 10:55:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.114 10:55:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.114 10:55:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.114 10:55:10 -- common/autotest_common.sh@10 -- # set +x 00:09:14.114 10:55:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.114 10:55:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:14.114 10:55:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.114 10:55:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.114 10:55:10 -- common/autotest_common.sh@10 -- # set +x 00:09:14.114 10:55:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.114 10:55:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.114 10:55:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.114 10:55:10 -- common/autotest_common.sh@10 -- # set +x 00:09:14.114 [2024-05-15 10:55:10.627952] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.114 10:55:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.114 10:55:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:14.114 10:55:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.114 10:55:10 -- common/autotest_common.sh@10 -- # set +x 00:09:14.114 10:55:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.114 10:55:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode1 00:09:14.114 10:55:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.114 10:55:10 -- common/autotest_common.sh@10 -- # set +x 00:09:14.114 10:55:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.114 10:55:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.499 10:55:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.500 10:55:12 -- common/autotest_common.sh@1194 -- # local i=0 00:09:15.500 10:55:12 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.500 10:55:12 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:15.500 10:55:12 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:18.046 10:55:14 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:18.046 10:55:14 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:18.046 10:55:14 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.046 10:55:14 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:18.046 10:55:14 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.046 10:55:14 -- common/autotest_common.sh@1204 -- # return 0 00:09:18.046 10:55:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.046 10:55:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.046 10:55:14 -- common/autotest_common.sh@1215 -- # local i=0 00:09:18.046 10:55:14 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:18.046 10:55:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.046 10:55:14 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:18.046 10:55:14 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.046 10:55:14 -- common/autotest_common.sh@1227 -- # return 0 00:09:18.046 10:55:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.046 10:55:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.046 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:09:18.046 10:55:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.046 10:55:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.046 10:55:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.046 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:09:18.046 10:55:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.046 10:55:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:18.046 10:55:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:18.046 10:55:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.046 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:09:18.046 10:55:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.046 10:55:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.046 10:55:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.046 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:09:18.046 [2024-05-15 10:55:14.325105] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.046 10:55:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.046 10:55:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:18.046 10:55:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.046 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:09:18.046 10:55:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.046 10:55:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:18.046 10:55:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.046 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:09:18.046 10:55:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.046 10:55:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.433 10:55:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.433 10:55:15 -- common/autotest_common.sh@1194 -- # local i=0 00:09:19.433 10:55:15 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.433 10:55:15 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:19.433 10:55:15 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:21.411 10:55:17 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:21.412 10:55:17 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:21.412 10:55:17 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.412 10:55:17 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:21.412 10:55:17 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.412 10:55:17 -- common/autotest_common.sh@1204 -- # return 0 00:09:21.412 10:55:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.412 10:55:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.412 10:55:17 -- common/autotest_common.sh@1215 -- # local i=0 00:09:21.412 10:55:17 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:21.412 10:55:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.412 10:55:17 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:21.412 10:55:17 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.412 10:55:17 -- common/autotest_common.sh@1227 -- # return 0 00:09:21.412 10:55:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.412 10:55:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.412 10:55:17 -- common/autotest_common.sh@10 -- # set +x 00:09:21.412 10:55:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.412 10:55:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.412 10:55:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.412 10:55:17 -- common/autotest_common.sh@10 -- # set +x 00:09:21.412 10:55:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.412 10:55:18 -- target/rpc.sh@99 -- # seq 1 5 00:09:21.412 10:55:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:21.412 10:55:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:09:21.412 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.412 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.412 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.412 10:55:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.412 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.412 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.412 [2024-05-15 10:55:18.024725] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.412 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.412 10:55:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.412 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.412 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.412 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.412 10:55:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.412 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.412 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.412 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.412 10:55:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.412 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.412 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.412 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.412 10:55:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.412 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.412 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:21.674 10:55:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 [2024-05-15 10:55:18.084864] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:21.674 10:55:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 [2024-05-15 10:55:18.141027] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:21.674 10:55:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.674 [2024-05-15 10:55:18.201222] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:21.674 10:55:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 [2024-05-15 10:55:18.261440] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.674 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.674 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.674 10:55:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.674 10:55:18 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.674 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.675 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.675 10:55:18 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:21.675 10:55:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.675 10:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:21.675 10:55:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.675 10:55:18 -- target/rpc.sh@110 -- # stats='{ 00:09:21.675 "tick_rate": 2400000000, 00:09:21.675 "poll_groups": [ 00:09:21.675 { 00:09:21.675 "name": "nvmf_tgt_poll_group_000", 00:09:21.675 "admin_qpairs": 0, 00:09:21.675 "io_qpairs": 224, 00:09:21.675 "current_admin_qpairs": 0, 00:09:21.675 "current_io_qpairs": 0, 00:09:21.675 "pending_bdev_io": 0, 00:09:21.675 "completed_nvme_io": 225, 00:09:21.675 "transports": [ 00:09:21.675 { 00:09:21.675 "trtype": "TCP" 00:09:21.675 } 00:09:21.675 ] 00:09:21.675 }, 00:09:21.675 { 00:09:21.675 "name": "nvmf_tgt_poll_group_001", 00:09:21.675 "admin_qpairs": 1, 00:09:21.675 "io_qpairs": 223, 00:09:21.675 "current_admin_qpairs": 0, 00:09:21.675 "current_io_qpairs": 0, 00:09:21.675 "pending_bdev_io": 0, 00:09:21.675 "completed_nvme_io": 228, 00:09:21.675 "transports": [ 00:09:21.675 { 00:09:21.675 "trtype": "TCP" 00:09:21.675 } 00:09:21.675 ] 00:09:21.675 }, 00:09:21.675 { 00:09:21.675 "name": "nvmf_tgt_poll_group_002", 00:09:21.675 "admin_qpairs": 6, 00:09:21.675 "io_qpairs": 218, 00:09:21.675 "current_admin_qpairs": 0, 00:09:21.675 "current_io_qpairs": 0, 00:09:21.675 "pending_bdev_io": 0, 00:09:21.675 "completed_nvme_io": 246, 00:09:21.675 "transports": [ 00:09:21.675 { 00:09:21.675 "trtype": "TCP" 00:09:21.675 } 00:09:21.675 ] 00:09:21.675 }, 00:09:21.675 { 00:09:21.675 "name": "nvmf_tgt_poll_group_003", 00:09:21.675 "admin_qpairs": 0, 00:09:21.675 "io_qpairs": 224, 00:09:21.675 "current_admin_qpairs": 0, 00:09:21.675 "current_io_qpairs": 0, 00:09:21.675 "pending_bdev_io": 0, 00:09:21.675 "completed_nvme_io": 540, 00:09:21.675 "transports": [ 00:09:21.675 { 00:09:21.675 "trtype": "TCP" 00:09:21.675 } 00:09:21.675 ] 00:09:21.675 } 00:09:21.675 ] 00:09:21.675 }' 00:09:21.675 10:55:18 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:21.675 10:55:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:21.675 10:55:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:21.675 10:55:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.937 10:55:18 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:21.937 10:55:18 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:21.937 10:55:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:21.937 10:55:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:21.937 10:55:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.937 10:55:18 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:21.937 10:55:18 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:21.937 10:55:18 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:21.937 10:55:18 -- target/rpc.sh@123 -- # nvmftestfini 00:09:21.937 10:55:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:21.937 10:55:18 -- nvmf/common.sh@117 -- # sync 00:09:21.937 10:55:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:21.937 10:55:18 -- nvmf/common.sh@120 -- # set +e 00:09:21.937 10:55:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:21.937 10:55:18 -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:09:21.937 rmmod nvme_tcp 00:09:21.937 rmmod nvme_fabrics 00:09:21.937 rmmod nvme_keyring 00:09:21.937 10:55:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.937 10:55:18 -- nvmf/common.sh@124 -- # set -e 00:09:21.937 10:55:18 -- nvmf/common.sh@125 -- # return 0 00:09:21.937 10:55:18 -- nvmf/common.sh@478 -- # '[' -n 196031 ']' 00:09:21.937 10:55:18 -- nvmf/common.sh@479 -- # killprocess 196031 00:09:21.937 10:55:18 -- common/autotest_common.sh@946 -- # '[' -z 196031 ']' 00:09:21.937 10:55:18 -- common/autotest_common.sh@950 -- # kill -0 196031 00:09:21.937 10:55:18 -- common/autotest_common.sh@951 -- # uname 00:09:21.937 10:55:18 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:21.937 10:55:18 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196031 00:09:21.937 10:55:18 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:21.937 10:55:18 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:21.937 10:55:18 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196031' 00:09:21.937 killing process with pid 196031 00:09:21.937 10:55:18 -- common/autotest_common.sh@965 -- # kill 196031 00:09:21.937 [2024-05-15 10:55:18.547406] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:21.937 10:55:18 -- common/autotest_common.sh@970 -- # wait 196031 00:09:22.198 10:55:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:22.198 10:55:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:22.198 10:55:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:22.198 10:55:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.198 10:55:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.198 10:55:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.198 10:55:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.198 10:55:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.112 10:55:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.112 00:09:24.112 real 0m36.736s 00:09:24.112 user 1m52.158s 00:09:24.112 sys 0m6.853s 00:09:24.112 10:55:20 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.112 10:55:20 -- common/autotest_common.sh@10 -- # set +x 00:09:24.112 ************************************ 00:09:24.112 END TEST nvmf_rpc 00:09:24.112 ************************************ 00:09:24.374 10:55:20 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:24.374 10:55:20 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:24.374 10:55:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.374 10:55:20 -- common/autotest_common.sh@10 -- # set +x 00:09:24.374 ************************************ 00:09:24.374 START TEST nvmf_invalid 00:09:24.374 ************************************ 00:09:24.374 10:55:20 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:24.374 * Looking for test storage... 
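For readers following the RPC traffic above: each iteration of the nvmf_rpc loop builds a subsystem, exposes it over TCP, connects the host, waits for the namespace to appear, then tears everything back down, and the final nvmf_get_stats output is reduced with jq and awk. A condensed sketch of that flow is below; every RPC name and argument is taken from the log, while the standalone rpc.py invocation and the explicit until/sleep polling loop are simplifications of the rpc_cmd and waitforserial helpers the real test uses.

# One iteration of the create/connect/teardown cycle exercised by nvmf_rpc (sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME           # serial that waitforserial greps for
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # NVMe/TCP listener on port 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5                      # attach the Malloc1 bdev as nsid 5
$rpc nvmf_subsystem_allow_any_host "$nqn"

nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420

# waitforserial (simplified): poll until lsblk reports a block device with the expected serial
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done

nvme disconnect -n "$nqn"
$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"

# jsum: sum one field of nvmf_get_stats across all poll groups (here io_qpairs)
$rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'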
00:09:24.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.374 10:55:20 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.374 10:55:20 -- nvmf/common.sh@7 -- # uname -s 00:09:24.374 10:55:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.374 10:55:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.374 10:55:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.374 10:55:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.374 10:55:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.374 10:55:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.374 10:55:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.374 10:55:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.374 10:55:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.374 10:55:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.374 10:55:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:24.374 10:55:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:24.374 10:55:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.374 10:55:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.374 10:55:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.374 10:55:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.374 10:55:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.374 10:55:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.374 10:55:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.374 10:55:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.374 10:55:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.374 10:55:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.374 10:55:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.374 10:55:20 -- paths/export.sh@5 -- # export PATH 00:09:24.375 10:55:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.375 10:55:20 -- nvmf/common.sh@47 -- # : 0 00:09:24.375 10:55:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.375 10:55:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.375 10:55:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.375 10:55:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.375 10:55:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.375 10:55:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.375 10:55:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.375 10:55:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.375 10:55:20 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:24.375 10:55:20 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.375 10:55:20 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:24.375 10:55:20 -- target/invalid.sh@14 -- # target=foobar 00:09:24.375 10:55:20 -- target/invalid.sh@16 -- # RANDOM=0 00:09:24.375 10:55:20 -- target/invalid.sh@34 -- # nvmftestinit 00:09:24.375 10:55:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:24.375 10:55:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.375 10:55:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:24.375 10:55:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:24.375 10:55:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:24.375 10:55:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.375 10:55:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.375 10:55:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.375 10:55:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:24.375 10:55:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:24.375 10:55:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:24.375 10:55:20 -- common/autotest_common.sh@10 -- # set +x 00:09:30.968 10:55:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:30.968 10:55:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.968 10:55:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.968 10:55:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.968 10:55:27 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.968 10:55:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.968 10:55:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.968 10:55:27 -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.968 10:55:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.968 10:55:27 -- nvmf/common.sh@296 -- # e810=() 00:09:30.968 10:55:27 -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.968 10:55:27 -- nvmf/common.sh@297 -- # x722=() 00:09:30.968 10:55:27 -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.968 10:55:27 -- nvmf/common.sh@298 -- # mlx=() 00:09:30.968 10:55:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.968 10:55:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.968 10:55:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.968 10:55:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.968 10:55:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.968 10:55:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.968 10:55:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.968 10:55:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.968 10:55:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.968 10:55:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:30.968 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:30.969 10:55:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.969 10:55:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:30.969 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:30.969 10:55:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.969 10:55:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.969 
10:55:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.969 10:55:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:30.969 10:55:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.969 10:55:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:30.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:30.969 10:55:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.969 10:55:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.969 10:55:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.969 10:55:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:30.969 10:55:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.969 10:55:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:30.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:30.969 10:55:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.969 10:55:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:30.969 10:55:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:30.969 10:55:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:30.969 10:55:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:30.969 10:55:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.969 10:55:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.969 10:55:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.969 10:55:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.969 10:55:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.969 10:55:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.969 10:55:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.969 10:55:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.969 10:55:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.969 10:55:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.231 10:55:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.231 10:55:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.231 10:55:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.231 10:55:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.231 10:55:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.231 10:55:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.231 10:55:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.231 10:55:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.231 10:55:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.231 10:55:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:09:31.492 00:09:31.492 --- 10.0.0.2 ping statistics --- 00:09:31.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.492 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:09:31.492 10:55:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:09:31.493 00:09:31.493 --- 10.0.0.1 ping statistics --- 00:09:31.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.493 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:31.493 10:55:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.493 10:55:27 -- nvmf/common.sh@411 -- # return 0 00:09:31.493 10:55:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:31.493 10:55:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.493 10:55:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:31.493 10:55:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:31.493 10:55:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.493 10:55:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:31.493 10:55:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:31.493 10:55:27 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:31.493 10:55:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:31.493 10:55:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:31.493 10:55:27 -- common/autotest_common.sh@10 -- # set +x 00:09:31.493 10:55:27 -- nvmf/common.sh@470 -- # nvmfpid=205882 00:09:31.493 10:55:27 -- nvmf/common.sh@471 -- # waitforlisten 205882 00:09:31.493 10:55:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.493 10:55:27 -- common/autotest_common.sh@827 -- # '[' -z 205882 ']' 00:09:31.493 10:55:27 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.493 10:55:27 -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:31.493 10:55:27 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.493 10:55:27 -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:31.493 10:55:27 -- common/autotest_common.sh@10 -- # set +x 00:09:31.493 [2024-05-15 10:55:28.003484] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:09:31.493 [2024-05-15 10:55:28.003562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.493 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.493 [2024-05-15 10:55:28.074393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.755 [2024-05-15 10:55:28.148057] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.755 [2024-05-15 10:55:28.148097] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.755 [2024-05-15 10:55:28.148105] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.755 [2024-05-15 10:55:28.148113] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.755 [2024-05-15 10:55:28.148119] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
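The nvmf_invalid run that begins here follows one pattern throughout: call nvmf_create_subsystem through rpc.py with a deliberately bad value, capture the JSON-RPC error, and assert that the expected message appears. A minimal sketch of that pattern follows; the arguments and expected messages are copied from the log entries further down, while the 2>&1 capture and the || true guard are assumptions standing in for the test's own output handling.

# Sketch of the invalid-argument checks performed by nvmf_invalid.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Unknown target name -> "Unable to find target foobar"
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25254 2>&1) || true
[[ $out == *"Unable to find target"* ]]

# Serial number containing a control character (0x1f) -> "Invalid SN"
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21537 2>&1) || true
[[ $out == *"Invalid SN"* ]]

# Model number containing a control character -> "Invalid MN"
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31000 2>&1) || true
[[ $out == *"Invalid MN"* ]]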
00:09:31.755 [2024-05-15 10:55:28.148264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.755 [2024-05-15 10:55:28.148379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.755 [2024-05-15 10:55:28.148541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.755 [2024-05-15 10:55:28.148541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.328 10:55:28 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:32.328 10:55:28 -- common/autotest_common.sh@860 -- # return 0 00:09:32.328 10:55:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:32.328 10:55:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.328 10:55:28 -- common/autotest_common.sh@10 -- # set +x 00:09:32.328 10:55:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.328 10:55:28 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:32.328 10:55:28 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25254 00:09:32.328 [2024-05-15 10:55:28.961467] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:32.589 10:55:28 -- target/invalid.sh@40 -- # out='request: 00:09:32.589 { 00:09:32.589 "nqn": "nqn.2016-06.io.spdk:cnode25254", 00:09:32.589 "tgt_name": "foobar", 00:09:32.589 "method": "nvmf_create_subsystem", 00:09:32.589 "req_id": 1 00:09:32.589 } 00:09:32.589 Got JSON-RPC error response 00:09:32.589 response: 00:09:32.589 { 00:09:32.589 "code": -32603, 00:09:32.589 "message": "Unable to find target foobar" 00:09:32.589 }' 00:09:32.590 10:55:28 -- target/invalid.sh@41 -- # [[ request: 00:09:32.590 { 00:09:32.590 "nqn": "nqn.2016-06.io.spdk:cnode25254", 00:09:32.590 "tgt_name": "foobar", 00:09:32.590 "method": "nvmf_create_subsystem", 00:09:32.590 "req_id": 1 00:09:32.590 } 00:09:32.590 Got JSON-RPC error response 00:09:32.590 response: 00:09:32.590 { 00:09:32.590 "code": -32603, 00:09:32.590 "message": "Unable to find target foobar" 00:09:32.590 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:32.590 10:55:28 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:32.590 10:55:28 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21537 00:09:32.590 [2024-05-15 10:55:29.138056] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21537: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:32.590 10:55:29 -- target/invalid.sh@45 -- # out='request: 00:09:32.590 { 00:09:32.590 "nqn": "nqn.2016-06.io.spdk:cnode21537", 00:09:32.590 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:32.590 "method": "nvmf_create_subsystem", 00:09:32.590 "req_id": 1 00:09:32.590 } 00:09:32.590 Got JSON-RPC error response 00:09:32.590 response: 00:09:32.590 { 00:09:32.590 "code": -32602, 00:09:32.590 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:32.590 }' 00:09:32.590 10:55:29 -- target/invalid.sh@46 -- # [[ request: 00:09:32.590 { 00:09:32.590 "nqn": "nqn.2016-06.io.spdk:cnode21537", 00:09:32.590 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:32.590 "method": "nvmf_create_subsystem", 00:09:32.590 "req_id": 1 00:09:32.590 } 00:09:32.590 Got JSON-RPC error response 00:09:32.590 response: 00:09:32.590 { 
00:09:32.590 "code": -32602, 00:09:32.590 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:32.590 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:32.590 10:55:29 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:32.590 10:55:29 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31000 00:09:32.852 [2024-05-15 10:55:29.314685] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31000: invalid model number 'SPDK_Controller' 00:09:32.852 10:55:29 -- target/invalid.sh@50 -- # out='request: 00:09:32.852 { 00:09:32.852 "nqn": "nqn.2016-06.io.spdk:cnode31000", 00:09:32.852 "model_number": "SPDK_Controller\u001f", 00:09:32.852 "method": "nvmf_create_subsystem", 00:09:32.852 "req_id": 1 00:09:32.852 } 00:09:32.852 Got JSON-RPC error response 00:09:32.852 response: 00:09:32.852 { 00:09:32.852 "code": -32602, 00:09:32.852 "message": "Invalid MN SPDK_Controller\u001f" 00:09:32.852 }' 00:09:32.852 10:55:29 -- target/invalid.sh@51 -- # [[ request: 00:09:32.852 { 00:09:32.852 "nqn": "nqn.2016-06.io.spdk:cnode31000", 00:09:32.852 "model_number": "SPDK_Controller\u001f", 00:09:32.852 "method": "nvmf_create_subsystem", 00:09:32.852 "req_id": 1 00:09:32.852 } 00:09:32.852 Got JSON-RPC error response 00:09:32.852 response: 00:09:32.852 { 00:09:32.852 "code": -32602, 00:09:32.852 "message": "Invalid MN SPDK_Controller\u001f" 00:09:32.852 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:32.852 10:55:29 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:32.852 10:55:29 -- target/invalid.sh@19 -- # local length=21 ll 00:09:32.852 10:55:29 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:32.852 10:55:29 -- target/invalid.sh@21 -- # local chars 00:09:32.852 10:55:29 -- target/invalid.sh@22 -- # local string 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 59 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=';' 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 120 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=x 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 113 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=q 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 43 00:09:32.852 10:55:29 -- 
target/invalid.sh@25 -- # echo -e '\x2b' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=+ 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 51 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=3 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 75 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=K 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 51 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=3 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 76 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=L 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 41 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=')' 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 38 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+='&' 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 75 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=K 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 123 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+='{' 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 62 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+='>' 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 83 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=S 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 56 00:09:32.852 10:55:29 -- 
target/invalid.sh@25 -- # echo -e '\x38' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=8 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # printf %x 97 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:32.852 10:55:29 -- target/invalid.sh@25 -- # string+=a 00:09:32.852 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # printf %x 50 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # string+=2 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # printf %x 120 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # string+=x 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # printf %x 62 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # string+='>' 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # printf %x 127 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # string+=$'\177' 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # printf %x 67 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:32.853 10:55:29 -- target/invalid.sh@25 -- # string+=C 00:09:32.853 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.114 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.114 10:55:29 -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:09:33.114 10:55:29 -- target/invalid.sh@31 -- # echo ';xq+3K3L)&K{>S8a2x>C' 00:09:33.114 10:55:29 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';xq+3K3L)&K{>S8a2x>C' nqn.2016-06.io.spdk:cnode16687 00:09:33.114 [2024-05-15 10:55:29.647755] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16687: invalid serial number ';xq+3K3L)&K{>S8a2x>C' 00:09:33.114 10:55:29 -- target/invalid.sh@54 -- # out='request: 00:09:33.114 { 00:09:33.114 "nqn": "nqn.2016-06.io.spdk:cnode16687", 00:09:33.114 "serial_number": ";xq+3K3L)&K{>S8a2x>\u007fC", 00:09:33.114 "method": "nvmf_create_subsystem", 00:09:33.114 "req_id": 1 00:09:33.114 } 00:09:33.114 Got JSON-RPC error response 00:09:33.114 response: 00:09:33.114 { 00:09:33.114 "code": -32602, 00:09:33.114 "message": "Invalid SN ;xq+3K3L)&K{>S8a2x>\u007fC" 00:09:33.114 }' 00:09:33.114 10:55:29 -- target/invalid.sh@55 -- # [[ request: 00:09:33.114 { 00:09:33.114 "nqn": "nqn.2016-06.io.spdk:cnode16687", 00:09:33.114 "serial_number": ";xq+3K3L)&K{>S8a2x>\u007fC", 00:09:33.114 "method": "nvmf_create_subsystem", 00:09:33.114 "req_id": 1 00:09:33.114 } 00:09:33.114 Got JSON-RPC error response 00:09:33.114 response: 00:09:33.114 { 00:09:33.114 "code": -32602, 
00:09:33.114 "message": "Invalid SN ;xq+3K3L)&K{>S8a2x>\u007fC" 00:09:33.114 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:33.114 10:55:29 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:33.114 10:55:29 -- target/invalid.sh@19 -- # local length=41 ll 00:09:33.114 10:55:29 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:33.114 10:55:29 -- target/invalid.sh@21 -- # local chars 00:09:33.114 10:55:29 -- target/invalid.sh@22 -- # local string 00:09:33.114 10:55:29 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:33.114 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # printf %x 115 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # string+=s 00:09:33.114 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.114 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # printf %x 33 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # string+='!' 00:09:33.114 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.114 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # printf %x 78 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:33.114 10:55:29 -- target/invalid.sh@25 -- # string+=N 00:09:33.114 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # printf %x 124 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # string+='|' 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # printf %x 98 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # string+=b 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # printf %x 95 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # string+=_ 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # printf %x 122 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # string+=z 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # printf %x 126 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # string+='~' 00:09:33.115 10:55:29 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # printf %x 114 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # string+=r 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # printf %x 39 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # string+=\' 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # printf %x 84 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:33.115 10:55:29 -- target/invalid.sh@25 -- # string+=T 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.115 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 57 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=9 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 79 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=O 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 97 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=a 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 124 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+='|' 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 123 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+='{' 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 114 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=r 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 52 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=4 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 98 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=b 00:09:33.377 10:55:29 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 90 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=Z 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 66 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=B 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 81 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=Q 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 53 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=5 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 97 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=a 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 57 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=9 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 117 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=u 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 80 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=P 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 95 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=_ 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 106 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=j 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 101 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=e 00:09:33.377 10:55:29 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 84 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=T 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 39 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=\' 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 56 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+=8 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 62 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # string+='>' 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.377 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # printf %x 40 00:09:33.377 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # string+='(' 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # printf %x 41 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # string+=')' 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # printf %x 101 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # string+=e 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # printf %x 63 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # string+='?' 
00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # printf %x 99 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # string+=c 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # printf %x 87 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # string+=W 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # printf %x 120 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:33.378 10:55:29 -- target/invalid.sh@25 -- # string+=x 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:33.378 10:55:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:33.378 10:55:29 -- target/invalid.sh@28 -- # [[ s == \- ]] 00:09:33.378 10:55:29 -- target/invalid.sh@31 -- # echo 's!N|b_z~r'\''T9Oa|{r4bZBQ5a9uP_jeT'\''8>()e?cWx' 00:09:33.378 10:55:29 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 's!N|b_z~r'\''T9Oa|{r4bZBQ5a9uP_jeT'\''8>()e?cWx' nqn.2016-06.io.spdk:cnode20729 00:09:33.640 [2024-05-15 10:55:30.125325] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20729: invalid model number 's!N|b_z~r'T9Oa|{r4bZBQ5a9uP_jeT'8>()e?cWx' 00:09:33.640 10:55:30 -- target/invalid.sh@58 -- # out='request: 00:09:33.640 { 00:09:33.640 "nqn": "nqn.2016-06.io.spdk:cnode20729", 00:09:33.640 "model_number": "s!N|b_z~r'\''T9Oa|{r4bZBQ5a9uP_jeT'\''8>()e?cWx", 00:09:33.640 "method": "nvmf_create_subsystem", 00:09:33.640 "req_id": 1 00:09:33.640 } 00:09:33.640 Got JSON-RPC error response 00:09:33.640 response: 00:09:33.640 { 00:09:33.640 "code": -32602, 00:09:33.640 "message": "Invalid MN s!N|b_z~r'\''T9Oa|{r4bZBQ5a9uP_jeT'\''8>()e?cWx" 00:09:33.640 }' 00:09:33.640 10:55:30 -- target/invalid.sh@59 -- # [[ request: 00:09:33.640 { 00:09:33.640 "nqn": "nqn.2016-06.io.spdk:cnode20729", 00:09:33.640 "model_number": "s!N|b_z~r'T9Oa|{r4bZBQ5a9uP_jeT'8>()e?cWx", 00:09:33.640 "method": "nvmf_create_subsystem", 00:09:33.640 "req_id": 1 00:09:33.640 } 00:09:33.640 Got JSON-RPC error response 00:09:33.640 response: 00:09:33.640 { 00:09:33.640 "code": -32602, 00:09:33.640 "message": "Invalid MN s!N|b_z~r'T9Oa|{r4bZBQ5a9uP_jeT'8>()e?cWx" 00:09:33.640 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:33.640 10:55:30 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:33.901 [2024-05-15 10:55:30.297927] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.901 10:55:30 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:33.901 10:55:30 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:33.901 10:55:30 -- target/invalid.sh@67 -- # echo '' 00:09:33.901 10:55:30 -- target/invalid.sh@67 -- # head -n 1 00:09:33.901 10:55:30 -- target/invalid.sh@67 -- # IP= 00:09:33.901 10:55:30 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:34.163 [2024-05-15 10:55:30.651014] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:34.163 [2024-05-15 10:55:30.651076] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:34.163 10:55:30 -- target/invalid.sh@69 -- # out='request: 00:09:34.163 { 00:09:34.163 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:34.163 "listen_address": { 00:09:34.163 "trtype": "tcp", 00:09:34.163 "traddr": "", 00:09:34.163 "trsvcid": "4421" 00:09:34.163 }, 00:09:34.163 "method": "nvmf_subsystem_remove_listener", 00:09:34.163 "req_id": 1 00:09:34.163 } 00:09:34.163 Got JSON-RPC error response 00:09:34.163 response: 00:09:34.163 { 00:09:34.163 "code": -32602, 00:09:34.163 "message": "Invalid parameters" 00:09:34.163 }' 00:09:34.163 10:55:30 -- target/invalid.sh@70 -- # [[ request: 00:09:34.163 { 00:09:34.163 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:34.163 "listen_address": { 00:09:34.163 "trtype": "tcp", 00:09:34.163 "traddr": "", 00:09:34.163 "trsvcid": "4421" 00:09:34.163 }, 00:09:34.163 "method": "nvmf_subsystem_remove_listener", 00:09:34.163 "req_id": 1 00:09:34.163 } 00:09:34.163 Got JSON-RPC error response 00:09:34.163 response: 00:09:34.163 { 00:09:34.163 "code": -32602, 00:09:34.163 "message": "Invalid parameters" 00:09:34.163 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:34.163 10:55:30 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10162 -i 0 00:09:34.424 [2024-05-15 10:55:30.823563] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10162: invalid cntlid range [0-65519] 00:09:34.424 10:55:30 -- target/invalid.sh@73 -- # out='request: 00:09:34.424 { 00:09:34.424 "nqn": "nqn.2016-06.io.spdk:cnode10162", 00:09:34.424 "min_cntlid": 0, 00:09:34.424 "method": "nvmf_create_subsystem", 00:09:34.424 "req_id": 1 00:09:34.424 } 00:09:34.424 Got JSON-RPC error response 00:09:34.424 response: 00:09:34.424 { 00:09:34.424 "code": -32602, 00:09:34.424 "message": "Invalid cntlid range [0-65519]" 00:09:34.424 }' 00:09:34.424 10:55:30 -- target/invalid.sh@74 -- # [[ request: 00:09:34.424 { 00:09:34.424 "nqn": "nqn.2016-06.io.spdk:cnode10162", 00:09:34.424 "min_cntlid": 0, 00:09:34.425 "method": "nvmf_create_subsystem", 00:09:34.425 "req_id": 1 00:09:34.425 } 00:09:34.425 Got JSON-RPC error response 00:09:34.425 response: 00:09:34.425 { 00:09:34.425 "code": -32602, 00:09:34.425 "message": "Invalid cntlid range [0-65519]" 00:09:34.425 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.425 10:55:30 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27009 -i 65520 00:09:34.425 [2024-05-15 10:55:30.996093] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27009: invalid cntlid range [65520-65519] 00:09:34.425 10:55:31 -- target/invalid.sh@75 -- # out='request: 00:09:34.425 { 00:09:34.425 "nqn": "nqn.2016-06.io.spdk:cnode27009", 00:09:34.425 "min_cntlid": 65520, 00:09:34.425 "method": "nvmf_create_subsystem", 00:09:34.425 "req_id": 1 00:09:34.425 } 00:09:34.425 Got JSON-RPC error response 00:09:34.425 response: 00:09:34.425 { 00:09:34.425 "code": -32602, 00:09:34.425 "message": "Invalid cntlid 
range [65520-65519]" 00:09:34.425 }' 00:09:34.425 10:55:31 -- target/invalid.sh@76 -- # [[ request: 00:09:34.425 { 00:09:34.425 "nqn": "nqn.2016-06.io.spdk:cnode27009", 00:09:34.425 "min_cntlid": 65520, 00:09:34.425 "method": "nvmf_create_subsystem", 00:09:34.425 "req_id": 1 00:09:34.425 } 00:09:34.425 Got JSON-RPC error response 00:09:34.425 response: 00:09:34.425 { 00:09:34.425 "code": -32602, 00:09:34.425 "message": "Invalid cntlid range [65520-65519]" 00:09:34.425 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.425 10:55:31 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7511 -I 0 00:09:34.686 [2024-05-15 10:55:31.160625] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7511: invalid cntlid range [1-0] 00:09:34.686 10:55:31 -- target/invalid.sh@77 -- # out='request: 00:09:34.686 { 00:09:34.686 "nqn": "nqn.2016-06.io.spdk:cnode7511", 00:09:34.686 "max_cntlid": 0, 00:09:34.686 "method": "nvmf_create_subsystem", 00:09:34.686 "req_id": 1 00:09:34.686 } 00:09:34.686 Got JSON-RPC error response 00:09:34.686 response: 00:09:34.686 { 00:09:34.686 "code": -32602, 00:09:34.686 "message": "Invalid cntlid range [1-0]" 00:09:34.686 }' 00:09:34.686 10:55:31 -- target/invalid.sh@78 -- # [[ request: 00:09:34.686 { 00:09:34.686 "nqn": "nqn.2016-06.io.spdk:cnode7511", 00:09:34.686 "max_cntlid": 0, 00:09:34.686 "method": "nvmf_create_subsystem", 00:09:34.686 "req_id": 1 00:09:34.686 } 00:09:34.686 Got JSON-RPC error response 00:09:34.686 response: 00:09:34.686 { 00:09:34.686 "code": -32602, 00:09:34.686 "message": "Invalid cntlid range [1-0]" 00:09:34.686 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.686 10:55:31 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18893 -I 65520 00:09:34.686 [2024-05-15 10:55:31.333153] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18893: invalid cntlid range [1-65520] 00:09:34.947 10:55:31 -- target/invalid.sh@79 -- # out='request: 00:09:34.947 { 00:09:34.947 "nqn": "nqn.2016-06.io.spdk:cnode18893", 00:09:34.947 "max_cntlid": 65520, 00:09:34.947 "method": "nvmf_create_subsystem", 00:09:34.947 "req_id": 1 00:09:34.947 } 00:09:34.947 Got JSON-RPC error response 00:09:34.947 response: 00:09:34.947 { 00:09:34.947 "code": -32602, 00:09:34.947 "message": "Invalid cntlid range [1-65520]" 00:09:34.947 }' 00:09:34.947 10:55:31 -- target/invalid.sh@80 -- # [[ request: 00:09:34.947 { 00:09:34.947 "nqn": "nqn.2016-06.io.spdk:cnode18893", 00:09:34.947 "max_cntlid": 65520, 00:09:34.947 "method": "nvmf_create_subsystem", 00:09:34.947 "req_id": 1 00:09:34.947 } 00:09:34.947 Got JSON-RPC error response 00:09:34.947 response: 00:09:34.947 { 00:09:34.947 "code": -32602, 00:09:34.947 "message": "Invalid cntlid range [1-65520]" 00:09:34.947 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.947 10:55:31 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29837 -i 6 -I 5 00:09:34.947 [2024-05-15 10:55:31.485646] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29837: invalid cntlid range [6-5] 00:09:34.947 10:55:31 -- target/invalid.sh@83 -- # out='request: 00:09:34.947 { 00:09:34.947 "nqn": "nqn.2016-06.io.spdk:cnode29837", 00:09:34.947 "min_cntlid": 6, 00:09:34.947 
"max_cntlid": 5, 00:09:34.948 "method": "nvmf_create_subsystem", 00:09:34.948 "req_id": 1 00:09:34.948 } 00:09:34.948 Got JSON-RPC error response 00:09:34.948 response: 00:09:34.948 { 00:09:34.948 "code": -32602, 00:09:34.948 "message": "Invalid cntlid range [6-5]" 00:09:34.948 }' 00:09:34.948 10:55:31 -- target/invalid.sh@84 -- # [[ request: 00:09:34.948 { 00:09:34.948 "nqn": "nqn.2016-06.io.spdk:cnode29837", 00:09:34.948 "min_cntlid": 6, 00:09:34.948 "max_cntlid": 5, 00:09:34.948 "method": "nvmf_create_subsystem", 00:09:34.948 "req_id": 1 00:09:34.948 } 00:09:34.948 Got JSON-RPC error response 00:09:34.948 response: 00:09:34.948 { 00:09:34.948 "code": -32602, 00:09:34.948 "message": "Invalid cntlid range [6-5]" 00:09:34.948 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.948 10:55:31 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:35.210 10:55:31 -- target/invalid.sh@87 -- # out='request: 00:09:35.210 { 00:09:35.210 "name": "foobar", 00:09:35.210 "method": "nvmf_delete_target", 00:09:35.210 "req_id": 1 00:09:35.210 } 00:09:35.210 Got JSON-RPC error response 00:09:35.210 response: 00:09:35.210 { 00:09:35.210 "code": -32602, 00:09:35.210 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:35.210 }' 00:09:35.210 10:55:31 -- target/invalid.sh@88 -- # [[ request: 00:09:35.210 { 00:09:35.210 "name": "foobar", 00:09:35.210 "method": "nvmf_delete_target", 00:09:35.210 "req_id": 1 00:09:35.210 } 00:09:35.210 Got JSON-RPC error response 00:09:35.210 response: 00:09:35.210 { 00:09:35.210 "code": -32602, 00:09:35.210 "message": "The specified target doesn't exist, cannot delete it." 00:09:35.210 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:35.210 10:55:31 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:35.210 10:55:31 -- target/invalid.sh@91 -- # nvmftestfini 00:09:35.210 10:55:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:35.210 10:55:31 -- nvmf/common.sh@117 -- # sync 00:09:35.210 10:55:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.210 10:55:31 -- nvmf/common.sh@120 -- # set +e 00:09:35.210 10:55:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.210 10:55:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.210 rmmod nvme_tcp 00:09:35.210 rmmod nvme_fabrics 00:09:35.210 rmmod nvme_keyring 00:09:35.210 10:55:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.210 10:55:31 -- nvmf/common.sh@124 -- # set -e 00:09:35.210 10:55:31 -- nvmf/common.sh@125 -- # return 0 00:09:35.210 10:55:31 -- nvmf/common.sh@478 -- # '[' -n 205882 ']' 00:09:35.210 10:55:31 -- nvmf/common.sh@479 -- # killprocess 205882 00:09:35.210 10:55:31 -- common/autotest_common.sh@946 -- # '[' -z 205882 ']' 00:09:35.210 10:55:31 -- common/autotest_common.sh@950 -- # kill -0 205882 00:09:35.210 10:55:31 -- common/autotest_common.sh@951 -- # uname 00:09:35.210 10:55:31 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:35.210 10:55:31 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 205882 00:09:35.210 10:55:31 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:35.210 10:55:31 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:35.210 10:55:31 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 205882' 00:09:35.210 killing process with pid 205882 00:09:35.210 10:55:31 -- 
common/autotest_common.sh@965 -- # kill 205882 00:09:35.210 [2024-05-15 10:55:31.734101] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:35.210 10:55:31 -- common/autotest_common.sh@970 -- # wait 205882 00:09:35.472 10:55:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:35.472 10:55:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:35.472 10:55:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:35.472 10:55:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.472 10:55:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.472 10:55:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.472 10:55:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.472 10:55:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.387 10:55:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:37.387 00:09:37.387 real 0m13.096s 00:09:37.387 user 0m19.020s 00:09:37.387 sys 0m6.048s 00:09:37.387 10:55:33 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:37.387 10:55:33 -- common/autotest_common.sh@10 -- # set +x 00:09:37.387 ************************************ 00:09:37.387 END TEST nvmf_invalid 00:09:37.387 ************************************ 00:09:37.387 10:55:33 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:37.387 10:55:33 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:37.387 10:55:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:37.387 10:55:33 -- common/autotest_common.sh@10 -- # set +x 00:09:37.387 ************************************ 00:09:37.387 START TEST nvmf_abort 00:09:37.387 ************************************ 00:09:37.387 10:55:34 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:37.650 * Looking for test storage... 
00:09:37.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.650 10:55:34 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.650 10:55:34 -- nvmf/common.sh@7 -- # uname -s 00:09:37.650 10:55:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.650 10:55:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.650 10:55:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.650 10:55:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.650 10:55:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.650 10:55:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.650 10:55:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.650 10:55:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.650 10:55:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.650 10:55:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.650 10:55:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:37.650 10:55:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:37.650 10:55:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.650 10:55:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.650 10:55:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.650 10:55:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.650 10:55:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.650 10:55:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.650 10:55:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.650 10:55:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.650 10:55:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.650 10:55:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.650 10:55:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.650 10:55:34 -- paths/export.sh@5 -- # export PATH 00:09:37.650 10:55:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.650 10:55:34 -- nvmf/common.sh@47 -- # : 0 00:09:37.650 10:55:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.650 10:55:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.650 10:55:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.650 10:55:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.650 10:55:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.650 10:55:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.650 10:55:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.650 10:55:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.650 10:55:34 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.650 10:55:34 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:37.650 10:55:34 -- target/abort.sh@14 -- # nvmftestinit 00:09:37.650 10:55:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:37.650 10:55:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.650 10:55:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:37.650 10:55:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:37.650 10:55:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:37.650 10:55:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.650 10:55:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.650 10:55:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.650 10:55:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:37.650 10:55:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:37.650 10:55:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:37.650 10:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 10:55:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:44.246 10:55:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:44.246 10:55:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:44.246 10:55:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:44.246 10:55:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:44.246 10:55:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:44.246 10:55:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:44.246 10:55:40 -- nvmf/common.sh@295 -- # net_devs=() 00:09:44.246 10:55:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:44.246 10:55:40 -- nvmf/common.sh@296 -- 
# e810=() 00:09:44.246 10:55:40 -- nvmf/common.sh@296 -- # local -ga e810 00:09:44.246 10:55:40 -- nvmf/common.sh@297 -- # x722=() 00:09:44.246 10:55:40 -- nvmf/common.sh@297 -- # local -ga x722 00:09:44.246 10:55:40 -- nvmf/common.sh@298 -- # mlx=() 00:09:44.246 10:55:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:44.246 10:55:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.246 10:55:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:44.246 10:55:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:44.246 10:55:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:44.246 10:55:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:44.246 10:55:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:44.246 10:55:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:44.246 10:55:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:44.246 10:55:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:44.246 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:44.246 10:55:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:44.246 10:55:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:44.246 10:55:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.246 10:55:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.246 10:55:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:44.246 10:55:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:44.246 10:55:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:44.246 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:44.246 10:55:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:44.247 10:55:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:44.247 10:55:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.247 10:55:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:44.247 10:55:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.247 10:55:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:44.247 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:09:44.247 10:55:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.247 10:55:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:44.247 10:55:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.247 10:55:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:44.247 10:55:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.247 10:55:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:44.247 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:44.247 10:55:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.247 10:55:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:44.247 10:55:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:44.247 10:55:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:44.247 10:55:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:44.247 10:55:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.247 10:55:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.247 10:55:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.247 10:55:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:44.247 10:55:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.247 10:55:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.247 10:55:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:44.247 10:55:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.247 10:55:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.247 10:55:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:44.247 10:55:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:44.247 10:55:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.247 10:55:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.247 10:55:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.247 10:55:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.247 10:55:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:44.247 10:55:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.508 10:55:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.508 10:55:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.508 10:55:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:44.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:09:44.508 00:09:44.508 --- 10.0.0.2 ping statistics --- 00:09:44.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.508 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:09:44.508 10:55:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:44.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:09:44.508 00:09:44.508 --- 10.0.0.1 ping statistics --- 00:09:44.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.508 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:44.508 10:55:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.508 10:55:41 -- nvmf/common.sh@411 -- # return 0 00:09:44.508 10:55:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:44.508 10:55:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.508 10:55:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:44.508 10:55:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:44.508 10:55:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.508 10:55:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:44.508 10:55:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:44.508 10:55:41 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:44.508 10:55:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:44.508 10:55:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:44.508 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:44.508 10:55:41 -- nvmf/common.sh@470 -- # nvmfpid=210952 00:09:44.508 10:55:41 -- nvmf/common.sh@471 -- # waitforlisten 210952 00:09:44.508 10:55:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:44.508 10:55:41 -- common/autotest_common.sh@827 -- # '[' -z 210952 ']' 00:09:44.508 10:55:41 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.508 10:55:41 -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:44.508 10:55:41 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.508 10:55:41 -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:44.508 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:44.508 [2024-05-15 10:55:41.118351] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:09:44.508 [2024-05-15 10:55:41.118414] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.508 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.769 [2024-05-15 10:55:41.205293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.769 [2024-05-15 10:55:41.303165] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.769 [2024-05-15 10:55:41.303221] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.769 [2024-05-15 10:55:41.303229] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.769 [2024-05-15 10:55:41.303237] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.769 [2024-05-15 10:55:41.303247] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:44.769 [2024-05-15 10:55:41.303379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.769 [2024-05-15 10:55:41.303552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.769 [2024-05-15 10:55:41.303566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.338 10:55:41 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:45.338 10:55:41 -- common/autotest_common.sh@860 -- # return 0 00:09:45.338 10:55:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:45.338 10:55:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.338 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:45.338 10:55:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.338 10:55:41 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:45.338 10:55:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.338 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:45.339 [2024-05-15 10:55:41.949661] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.339 10:55:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.339 10:55:41 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:45.339 10:55:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.339 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:45.339 Malloc0 00:09:45.339 10:55:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.339 10:55:41 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:45.339 10:55:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.339 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:45.599 Delay0 00:09:45.599 10:55:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.599 10:55:41 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:45.599 10:55:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.599 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:45.599 10:55:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.599 10:55:42 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:45.599 10:55:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.599 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:09:45.599 10:55:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.599 10:55:42 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:45.599 10:55:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.599 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:09:45.599 [2024-05-15 10:55:42.015303] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:45.599 [2024-05-15 10:55:42.015525] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.599 10:55:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.599 10:55:42 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:45.599 10:55:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.599 10:55:42 -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.599 10:55:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.599 10:55:42 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:45.599 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.599 [2024-05-15 10:55:42.164867] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:48.143 Initializing NVMe Controllers 00:09:48.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:48.143 controller IO queue size 128 less than required 00:09:48.143 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:48.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:48.143 Initialization complete. Launching workers. 00:09:48.143 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36793 00:09:48.143 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36854, failed to submit 62 00:09:48.143 success 36797, unsuccess 57, failed 0 00:09:48.143 10:55:44 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:48.143 10:55:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.143 10:55:44 -- common/autotest_common.sh@10 -- # set +x 00:09:48.143 10:55:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.143 10:55:44 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:48.143 10:55:44 -- target/abort.sh@38 -- # nvmftestfini 00:09:48.143 10:55:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:48.143 10:55:44 -- nvmf/common.sh@117 -- # sync 00:09:48.143 10:55:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.143 10:55:44 -- nvmf/common.sh@120 -- # set +e 00:09:48.143 10:55:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.143 10:55:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.143 rmmod nvme_tcp 00:09:48.143 rmmod nvme_fabrics 00:09:48.143 rmmod nvme_keyring 00:09:48.143 10:55:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.143 10:55:44 -- nvmf/common.sh@124 -- # set -e 00:09:48.143 10:55:44 -- nvmf/common.sh@125 -- # return 0 00:09:48.143 10:55:44 -- nvmf/common.sh@478 -- # '[' -n 210952 ']' 00:09:48.143 10:55:44 -- nvmf/common.sh@479 -- # killprocess 210952 00:09:48.143 10:55:44 -- common/autotest_common.sh@946 -- # '[' -z 210952 ']' 00:09:48.143 10:55:44 -- common/autotest_common.sh@950 -- # kill -0 210952 00:09:48.143 10:55:44 -- common/autotest_common.sh@951 -- # uname 00:09:48.143 10:55:44 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:48.143 10:55:44 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 210952 00:09:48.143 10:55:44 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:48.143 10:55:44 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:48.143 10:55:44 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 210952' 00:09:48.143 killing process with pid 210952 00:09:48.143 10:55:44 -- common/autotest_common.sh@965 -- # kill 210952 00:09:48.143 [2024-05-15 10:55:44.366448] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:09:48.143 10:55:44 -- common/autotest_common.sh@970 -- # wait 210952 00:09:48.143 10:55:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:48.143 10:55:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:48.143 10:55:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:48.143 10:55:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:48.143 10:55:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:48.143 10:55:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.143 10:55:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.143 10:55:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.058 10:55:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:50.058 00:09:50.058 real 0m12.542s 00:09:50.058 user 0m13.675s 00:09:50.058 sys 0m5.646s 00:09:50.058 10:55:46 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:50.058 10:55:46 -- common/autotest_common.sh@10 -- # set +x 00:09:50.058 ************************************ 00:09:50.058 END TEST nvmf_abort 00:09:50.058 ************************************ 00:09:50.058 10:55:46 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:50.058 10:55:46 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:50.058 10:55:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:50.058 10:55:46 -- common/autotest_common.sh@10 -- # set +x 00:09:50.058 ************************************ 00:09:50.058 START TEST nvmf_ns_hotplug_stress 00:09:50.058 ************************************ 00:09:50.058 10:55:46 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:50.319 * Looking for test storage... 
00:09:50.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.319 10:55:46 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.319 10:55:46 -- nvmf/common.sh@7 -- # uname -s 00:09:50.319 10:55:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.320 10:55:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.320 10:55:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.320 10:55:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.320 10:55:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.320 10:55:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.320 10:55:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.320 10:55:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.320 10:55:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.320 10:55:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.320 10:55:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:50.320 10:55:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:50.320 10:55:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.320 10:55:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.320 10:55:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.320 10:55:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.320 10:55:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.320 10:55:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.320 10:55:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.320 10:55:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.320 10:55:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.320 10:55:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.320 10:55:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.320 10:55:46 -- paths/export.sh@5 -- # export PATH 00:09:50.320 10:55:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.320 10:55:46 -- nvmf/common.sh@47 -- # : 0 00:09:50.320 10:55:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.320 10:55:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.320 10:55:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.320 10:55:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.320 10:55:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.320 10:55:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.320 10:55:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.320 10:55:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.320 10:55:46 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.320 10:55:46 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:50.320 10:55:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:50.320 10:55:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.320 10:55:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:50.320 10:55:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:50.320 10:55:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:50.320 10:55:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.320 10:55:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.320 10:55:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.320 10:55:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:50.320 10:55:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:50.320 10:55:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:50.320 10:55:46 -- common/autotest_common.sh@10 -- # set +x 00:09:56.912 10:55:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:56.912 10:55:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.912 10:55:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.912 10:55:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.912 10:55:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.912 10:55:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.912 10:55:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.912 10:55:53 -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.912 10:55:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.912 10:55:53 -- nvmf/common.sh@296 
-- # e810=() 00:09:56.912 10:55:53 -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.912 10:55:53 -- nvmf/common.sh@297 -- # x722=() 00:09:56.912 10:55:53 -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.912 10:55:53 -- nvmf/common.sh@298 -- # mlx=() 00:09:56.912 10:55:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.912 10:55:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.912 10:55:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:56.912 10:55:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:56.912 10:55:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.912 10:55:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.912 10:55:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:56.912 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:56.912 10:55:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.912 10:55:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:56.912 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:56.912 10:55:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.912 10:55:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.912 10:55:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.912 10:55:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:56.912 10:55:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.912 10:55:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:56.912 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:09:56.912 10:55:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.912 10:55:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.912 10:55:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.912 10:55:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:56.912 10:55:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.912 10:55:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:56.912 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:56.912 10:55:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.912 10:55:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:56.912 10:55:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:56.912 10:55:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:56.912 10:55:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:56.912 10:55:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.912 10:55:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.912 10:55:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.912 10:55:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:56.912 10:55:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.912 10:55:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.912 10:55:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:56.912 10:55:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.912 10:55:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.912 10:55:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:56.912 10:55:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:56.912 10:55:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.912 10:55:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.173 10:55:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.173 10:55:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.173 10:55:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:57.173 10:55:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.173 10:55:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.173 10:55:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.173 10:55:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:57.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:09:57.173 00:09:57.173 --- 10.0.0.2 ping statistics --- 00:09:57.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.173 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:09:57.173 10:55:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:09:57.173 00:09:57.173 --- 10.0.0.1 ping statistics --- 00:09:57.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.173 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:09:57.173 10:55:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.173 10:55:53 -- nvmf/common.sh@411 -- # return 0 00:09:57.173 10:55:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:57.173 10:55:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.173 10:55:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:57.173 10:55:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:57.173 10:55:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.173 10:55:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:57.173 10:55:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:57.173 10:55:53 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:57.173 10:55:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:57.173 10:55:53 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:57.174 10:55:53 -- common/autotest_common.sh@10 -- # set +x 00:09:57.174 10:55:53 -- nvmf/common.sh@470 -- # nvmfpid=215742 00:09:57.174 10:55:53 -- nvmf/common.sh@471 -- # waitforlisten 215742 00:09:57.174 10:55:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:57.174 10:55:53 -- common/autotest_common.sh@827 -- # '[' -z 215742 ']' 00:09:57.174 10:55:53 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.174 10:55:53 -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:57.174 10:55:53 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.174 10:55:53 -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:57.174 10:55:53 -- common/autotest_common.sh@10 -- # set +x 00:09:57.174 [2024-05-15 10:55:53.814460] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:09:57.174 [2024-05-15 10:55:53.814515] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.435 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.435 [2024-05-15 10:55:53.896173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.435 [2024-05-15 10:55:53.973770] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.435 [2024-05-15 10:55:53.973816] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.435 [2024-05-15 10:55:53.973824] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.435 [2024-05-15 10:55:53.973830] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.435 [2024-05-15 10:55:53.973836] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
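The block above is nvmf/common.sh's nvmf_tcp_init step: the two discovered E810 ports (cvl_0_0 and cvl_0_1) are split between a private network namespace and the default one, so target and initiator can exercise real hardware on a single machine. A condensed sketch of the commands echoed in the trace (the NS variable is shorthand introduced here; everything else is taken from the log above):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                           # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address stays in the default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address lives inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2                                        # verify reachability in both directions
    ip netns exec "$NS" ping -c 1 10.0.0.1
    modprobe nvme-tcp                                         # initiator-side kernel driver

The sub-millisecond ping round trips above confirm the link is usable before the target application is started.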
00:09:57.435 [2024-05-15 10:55:53.973953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.435 [2024-05-15 10:55:53.974112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.435 [2024-05-15 10:55:53.974114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.005 10:55:54 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:58.005 10:55:54 -- common/autotest_common.sh@860 -- # return 0 00:09:58.005 10:55:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:58.005 10:55:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.005 10:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:58.005 10:55:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.005 10:55:54 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:58.005 10:55:54 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:58.267 [2024-05-15 10:55:54.765821] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.267 10:55:54 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:58.529 10:55:54 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.529 [2024-05-15 10:55:55.103055] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:58.529 [2024-05-15 10:55:55.103286] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.529 10:55:55 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.790 10:55:55 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:59.052 Malloc0 00:09:59.052 10:55:55 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:59.052 Delay0 00:09:59.052 10:55:55 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.313 10:55:55 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:59.313 NULL1 00:09:59.573 10:55:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:59.573 10:55:56 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:59.573 10:55:56 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=216152 00:09:59.573 10:55:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:09:59.573 10:55:56 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.573 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.834 10:55:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.834 10:55:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:59.834 10:55:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:00.094 [2024-05-15 10:55:56.619880] bdev.c:4975:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:00.094 true 00:10:00.094 10:55:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:00.094 10:55:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.354 10:55:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.354 10:55:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:00.354 10:55:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:00.615 true 00:10:00.615 10:55:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:00.615 10:55:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.876 10:55:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.876 10:55:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:00.876 10:55:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:01.137 true 00:10:01.137 10:55:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:01.137 10:55:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.398 10:55:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.398 10:55:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:01.398 10:55:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:01.660 true 00:10:01.660 10:55:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:01.660 10:55:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.660 10:55:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.921 10:55:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:01.921 10:55:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:02.181 true 00:10:02.181 10:55:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:02.181 
10:55:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.181 10:55:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.443 10:55:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:02.443 10:55:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:02.704 true 00:10:02.704 10:55:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:02.704 10:55:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.704 10:55:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.966 10:55:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:02.966 10:55:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:02.966 true 00:10:03.227 10:55:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:03.227 10:55:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.227 10:55:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.488 10:55:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:03.488 10:55:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:03.488 true 00:10:03.488 10:56:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:03.488 10:56:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.749 10:56:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.011 10:56:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:04.011 10:56:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:04.011 true 00:10:04.011 10:56:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:04.011 10:56:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.273 10:56:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.534 10:56:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:04.534 10:56:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:04.534 true 00:10:04.534 10:56:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:04.534 10:56:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.794 10:56:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.055 10:56:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:05.055 10:56:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:05.055 true 00:10:05.055 10:56:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:05.055 10:56:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.316 10:56:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.578 10:56:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:05.578 10:56:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:05.578 true 00:10:05.578 10:56:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:05.578 10:56:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.846 10:56:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.846 10:56:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:05.846 10:56:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:06.106 true 00:10:06.106 10:56:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:06.106 10:56:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.368 10:56:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.368 10:56:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:06.368 10:56:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:06.629 true 00:10:06.629 10:56:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:06.629 10:56:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.889 10:56:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.889 10:56:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:06.889 10:56:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:07.150 true 00:10:07.150 10:56:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:07.151 10:56:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.411 10:56:03 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.411 10:56:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:07.411 10:56:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:07.672 true 00:10:07.672 10:56:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:07.672 10:56:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.672 10:56:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.933 10:56:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:07.933 10:56:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:08.194 true 00:10:08.194 10:56:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:08.194 10:56:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.194 10:56:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.455 10:56:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:08.455 10:56:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:08.455 true 00:10:08.455 10:56:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:08.455 10:56:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.716 10:56:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.977 10:56:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:08.977 10:56:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:08.977 true 00:10:08.977 10:56:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:08.977 10:56:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.238 10:56:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.500 10:56:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:09.500 10:56:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:09.500 true 00:10:09.500 10:56:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:09.500 10:56:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.760 10:56:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
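Stepping back to the setup that preceded this loop: nvmf_tgt was started inside cvl_0_0_ns_spdk with -m 0xE (binary 1110, i.e. cores 1-3, matching the three "Reactor started on core" notices earlier) and then configured over rpc.py around timestamps 00:09:58-00:09:59. That sequence boils down to the following sketch (the $rpc shorthand is introduced here for readability; commands and arguments are as traced):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Backing bdevs: a small malloc bdev wrapped in a delay bdev (latency arguments in microseconds),
    # plus a 1000 MB null bdev that the loop keeps resizing.
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # becomes namespace 1
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # becomes namespace 2

With both namespaces attached, spdk_nvme_perf is launched against the target (30 s of 512-byte random reads at queue depth 128) and its PID is stored in PERF_PID so the script can tell when the workload finishes.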
00:10:09.760 10:56:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:09.760 10:56:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:10.022 true 00:10:10.022 10:56:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:10.022 10:56:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.283 10:56:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.283 10:56:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:10.283 10:56:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:10.544 true 00:10:10.544 10:56:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:10.544 10:56:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.805 10:56:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.805 10:56:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:10.805 10:56:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:11.066 true 00:10:11.066 10:56:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:11.066 10:56:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.066 10:56:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.327 10:56:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:11.327 10:56:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:11.589 true 00:10:11.589 10:56:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:11.589 10:56:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.589 10:56:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.849 10:56:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:11.849 10:56:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:12.110 true 00:10:12.110 10:56:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:12.110 10:56:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.110 10:56:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.371 10:56:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:12.371 10:56:08 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:12.631 true 00:10:12.631 10:56:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:12.631 10:56:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.631 10:56:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.892 10:56:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:12.892 10:56:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:12.892 true 00:10:13.153 10:56:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:13.153 10:56:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.153 10:56:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.414 10:56:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:13.415 10:56:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:13.415 true 00:10:13.415 10:56:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:13.415 10:56:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.675 10:56:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.936 10:56:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:13.936 10:56:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:13.936 true 00:10:13.936 10:56:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:13.936 10:56:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.197 10:56:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.458 10:56:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:14.458 10:56:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:14.458 true 00:10:14.458 10:56:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:14.458 10:56:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.718 10:56:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.978 10:56:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:14.978 10:56:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1031 00:10:14.978 true 00:10:14.978 10:56:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:14.978 10:56:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.238 10:56:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.498 10:56:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:15.498 10:56:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:15.498 true 00:10:15.498 10:56:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:15.498 10:56:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.758 10:56:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.018 10:56:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:16.018 10:56:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:16.018 true 00:10:16.018 10:56:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:16.018 10:56:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.279 10:56:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.279 10:56:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:16.279 10:56:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:16.539 true 00:10:16.539 10:56:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:16.539 10:56:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.799 10:56:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.799 10:56:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:16.799 10:56:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:17.059 true 00:10:17.059 10:56:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:17.059 10:56:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.319 10:56:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.319 10:56:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:17.319 10:56:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:17.579 true 00:10:17.579 10:56:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:17.579 10:56:14 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.839 10:56:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.839 10:56:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:17.839 10:56:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:18.100 true 00:10:18.100 10:56:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:18.100 10:56:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.360 10:56:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.360 10:56:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:18.360 10:56:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:18.620 true 00:10:18.620 10:56:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:18.620 10:56:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.620 10:56:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.881 10:56:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:18.881 10:56:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:19.141 true 00:10:19.141 10:56:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:19.141 10:56:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.141 10:56:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.401 10:56:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:19.401 10:56:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:19.662 true 00:10:19.662 10:56:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:19.662 10:56:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.662 10:56:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.922 10:56:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:19.922 10:56:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:20.183 true 00:10:20.183 10:56:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:20.183 10:56:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.183 10:56:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.443 10:56:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:20.443 10:56:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:20.702 true 00:10:20.702 10:56:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:20.702 10:56:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.702 10:56:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.962 10:56:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:20.962 10:56:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:20.962 true 00:10:21.221 10:56:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:21.221 10:56:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.221 10:56:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.481 10:56:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:21.481 10:56:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:21.481 true 00:10:21.481 10:56:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:21.481 10:56:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.742 10:56:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.002 10:56:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:22.002 10:56:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:22.002 true 00:10:22.002 10:56:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:22.002 10:56:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.262 10:56:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.523 10:56:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:22.523 10:56:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:22.523 true 00:10:22.523 10:56:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:22.523 10:56:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.785 10:56:19 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.785 10:56:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:22.785 10:56:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:23.045 true 00:10:23.045 10:56:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:23.045 10:56:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.306 10:56:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.306 10:56:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:23.306 10:56:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:23.566 true 00:10:23.566 10:56:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:23.566 10:56:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.827 10:56:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.827 10:56:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:23.827 10:56:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:24.087 true 00:10:24.087 10:56:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:24.087 10:56:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.348 10:56:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.348 10:56:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:24.348 10:56:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:24.608 true 00:10:24.608 10:56:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:24.608 10:56:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.868 10:56:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.868 10:56:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:24.868 10:56:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:25.129 true 00:10:25.129 10:56:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:25.129 10:56:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.390 10:56:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
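Each of the near-identical blocks above and below is one iteration of the stress loop (the ns_hotplug_stress.sh@44-@50 markers in the trace): as long as the perf process is alive, namespace 1 is hot-removed and re-added while the NULL1 bdev backing namespace 2 is grown by 1 MB. A rough paraphrase of what the trace shows, not the script verbatim:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    # PERF_PID holds the spdk_nvme_perf PID captured at launch (216152 in this run).
    while kill -0 "$PERF_PID"; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"      # 1001, 1002, ... as seen in the log
    done

Because the initiator keeps issuing I/O to namespace 2 for the whole 30 seconds, every remove/add/resize happens while requests are in flight; a crash or hang of the target here would fail the run.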
00:10:25.390 10:56:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:25.390 10:56:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:25.650 true 00:10:25.650 10:56:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:25.650 10:56:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.910 10:56:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.910 10:56:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:25.910 10:56:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:26.171 true 00:10:26.171 10:56:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:26.171 10:56:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.433 10:56:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.433 10:56:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:26.433 10:56:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:26.693 true 00:10:26.693 10:56:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:26.693 10:56:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.953 10:56:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.953 10:56:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:26.953 10:56:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:27.213 true 00:10:27.213 10:56:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:27.213 10:56:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.472 10:56:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.472 10:56:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:27.472 10:56:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:27.733 true 00:10:27.733 10:56:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:27.733 10:56:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.733 10:56:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.993 10:56:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:27.993 10:56:24 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:28.254 true 00:10:28.254 10:56:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:28.254 10:56:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.254 10:56:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.515 10:56:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:10:28.515 10:56:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:28.515 true 00:10:28.775 10:56:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:28.775 10:56:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.775 10:56:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.036 10:56:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:10:29.036 10:56:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:10:29.036 true 00:10:29.036 10:56:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:29.036 10:56:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.297 10:56:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.559 10:56:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:10:29.559 10:56:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:10:29.559 true 00:10:29.559 10:56:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:29.559 10:56:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.820 10:56:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.081 Initializing NVMe Controllers 00:10:30.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.081 Controller IO queue size 128, less than required. 00:10:30.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:30.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:30.081 Initialization complete. Launching workers. 
00:10:30.081 ======================================================== 00:10:30.081 Latency(us) 00:10:30.081 Device Information : IOPS MiB/s Average min max 00:10:30.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 32136.23 15.69 3982.94 1393.42 7663.81 00:10:30.081 ======================================================== 00:10:30.081 Total : 32136.23 15.69 3982.94 1393.42 7663.81 00:10:30.081 00:10:30.081 10:56:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:10:30.081 10:56:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:10:30.081 true 00:10:30.081 10:56:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 216152 00:10:30.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (216152) - No such process 00:10:30.081 10:56:26 -- target/ns_hotplug_stress.sh@53 -- # wait 216152 00:10:30.081 10:56:26 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.342 10:56:26 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.604 10:56:27 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:30.604 10:56:27 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:30.604 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:30.604 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:30.604 10:56:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:30.604 null0 00:10:30.604 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:30.604 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:30.604 10:56:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:30.865 null1 00:10:30.865 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:30.865 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:30.865 10:56:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:31.127 null2 00:10:31.127 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:31.127 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:31.127 10:56:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:31.127 null3 00:10:31.127 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:31.127 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:31.127 10:56:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:31.388 null4 00:10:31.388 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:31.388 10:56:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:31.388 10:56:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:31.388 null5 00:10:31.388 10:56:28 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:31.388 10:56:28 -- 
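Looking back at the spdk_nvme_perf summary printed just above (before the namespace cleanup), the numbers are internally consistent for a 512-byte random-read run at queue depth 128:

    throughput:  32136.23 IOPS x 512 B  =  ~16.45 MB/s  =  ~15.69 MiB/s   (matches the MiB/s column)
    latency:     queue depth / IOPS  =  128 / 32136.23 s  =  ~3.98 ms  =  ~3983 us   (reported average: 3982.94 us)

The latency line is just Little's law assuming the queue is kept full, so it is only an approximation, but it agrees with the reported average to within a microsecond.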
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:31.388 10:56:28 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:31.649 null6 00:10:31.649 10:56:28 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:31.649 10:56:28 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:31.649 10:56:28 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:31.910 null7 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.910 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
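Above, eight null bdevs are created (the trace shows bdev_null_create nullN 100 4096: bdev name, size, block size) and eight background workers are launched, each repeatedly attaching and detaching one of them under a fixed namespace ID. A rough sketch of that structure, reconstructed from the traced add_remove function and launch loop (rpc.py path shortened, variable names as they appear in the trace):

  add_remove() {                        # traced as ns_hotplug_stress.sh@14..18
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  for ((i = 0; i < nthreads; i++)); do  # launch loop traced at @62..64, nthreads=8 set at @58
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"                     # traced at @66

The heavily interleaved add_ns/remove_ns lines that follow are those eight workers running concurrently, so their ordering differs from run to run.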
00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@66 -- # wait 222877 222880 222882 222885 222888 222892 222894 222897 00:10:31.911 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:32.172 
10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.172 10:56:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:32.433 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.433 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.434 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.434 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.434 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.434 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.434 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.434 10:56:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.434 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.434 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.434 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.434 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.434 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.434 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.694 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.695 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:32.956 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.957 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:33.218 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.480 10:56:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:33.480 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.480 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.480 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:33.480 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:33.480 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.480 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.480 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.742 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.002 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.003 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.003 
10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:34.263 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.523 10:56:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.523 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.783 10:56:31 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.783 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.044 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.305 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.306 10:56:31 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.306 10:56:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.567 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.567 10:56:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:35.567 10:56:32 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:35.567 10:56:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:35.567 10:56:32 -- nvmf/common.sh@117 -- # sync 00:10:35.567 10:56:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.567 10:56:32 -- nvmf/common.sh@120 -- # set +e 00:10:35.567 10:56:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.567 10:56:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.829 rmmod nvme_tcp 00:10:35.829 rmmod nvme_fabrics 00:10:35.829 rmmod nvme_keyring 00:10:35.829 10:56:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.829 10:56:32 -- nvmf/common.sh@124 -- # set -e 00:10:35.829 10:56:32 -- nvmf/common.sh@125 -- # return 0 00:10:35.829 10:56:32 -- nvmf/common.sh@478 -- # '[' -n 215742 ']' 00:10:35.829 10:56:32 -- nvmf/common.sh@479 -- # killprocess 215742 00:10:35.829 10:56:32 -- common/autotest_common.sh@946 -- # '[' -z 215742 ']' 00:10:35.829 10:56:32 -- common/autotest_common.sh@950 -- # kill -0 215742 00:10:35.829 10:56:32 -- common/autotest_common.sh@951 -- # uname 00:10:35.829 10:56:32 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:35.829 10:56:32 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 215742 00:10:35.829 10:56:32 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:35.829 10:56:32 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:35.829 10:56:32 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 215742' 00:10:35.829 killing process with pid 215742 00:10:35.829 10:56:32 -- common/autotest_common.sh@965 -- # kill 215742 00:10:35.829 [2024-05-15 10:56:32.357787] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:35.829 10:56:32 -- common/autotest_common.sh@970 -- # wait 215742 00:10:35.829 10:56:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:35.829 10:56:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:35.829 10:56:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:35.829 10:56:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.829 
10:56:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.829 10:56:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.829 10:56:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.829 10:56:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.379 10:56:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:38.379 00:10:38.379 real 0m47.892s 00:10:38.379 user 3m17.948s 00:10:38.379 sys 0m16.336s 00:10:38.379 10:56:34 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:38.379 10:56:34 -- common/autotest_common.sh@10 -- # set +x 00:10:38.379 ************************************ 00:10:38.379 END TEST nvmf_ns_hotplug_stress 00:10:38.379 ************************************ 00:10:38.379 10:56:34 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:38.379 10:56:34 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:38.379 10:56:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.379 10:56:34 -- common/autotest_common.sh@10 -- # set +x 00:10:38.379 ************************************ 00:10:38.379 START TEST nvmf_connect_stress 00:10:38.379 ************************************ 00:10:38.379 10:56:34 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:38.379 * Looking for test storage... 00:10:38.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.379 10:56:34 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.379 10:56:34 -- nvmf/common.sh@7 -- # uname -s 00:10:38.379 10:56:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.379 10:56:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.379 10:56:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.379 10:56:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.379 10:56:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.379 10:56:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.379 10:56:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.379 10:56:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.379 10:56:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.379 10:56:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.379 10:56:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:38.379 10:56:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:38.379 10:56:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.379 10:56:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.379 10:56:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.379 10:56:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.379 10:56:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.379 10:56:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.379 10:56:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.379 10:56:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.379 10:56:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.379 10:56:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.379 10:56:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.379 10:56:34 -- paths/export.sh@5 -- # export PATH 00:10:38.379 10:56:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.379 10:56:34 -- nvmf/common.sh@47 -- # : 0 00:10:38.379 10:56:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.379 10:56:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.379 10:56:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.379 10:56:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.379 10:56:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.379 10:56:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.379 10:56:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.379 10:56:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.379 10:56:34 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:38.379 10:56:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:38.379 10:56:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.379 10:56:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:38.379 10:56:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:38.379 10:56:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:38.379 10:56:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.379 10:56:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.379 10:56:34 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.379 10:56:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:38.379 10:56:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:38.379 10:56:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:38.379 10:56:34 -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 10:56:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:44.974 10:56:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:44.974 10:56:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:44.974 10:56:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:44.974 10:56:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:44.974 10:56:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:44.974 10:56:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:44.974 10:56:41 -- nvmf/common.sh@295 -- # net_devs=() 00:10:44.974 10:56:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:44.974 10:56:41 -- nvmf/common.sh@296 -- # e810=() 00:10:44.974 10:56:41 -- nvmf/common.sh@296 -- # local -ga e810 00:10:44.974 10:56:41 -- nvmf/common.sh@297 -- # x722=() 00:10:44.974 10:56:41 -- nvmf/common.sh@297 -- # local -ga x722 00:10:44.974 10:56:41 -- nvmf/common.sh@298 -- # mlx=() 00:10:44.974 10:56:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:44.974 10:56:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.974 10:56:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:44.974 10:56:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:44.974 10:56:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:44.974 10:56:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.974 10:56:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:44.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:44.974 10:56:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.974 10:56:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:44.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:44.974 
10:56:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:44.974 10:56:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.974 10:56:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.974 10:56:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:44.974 10:56:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.974 10:56:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:44.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:44.974 10:56:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.974 10:56:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.974 10:56:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.974 10:56:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:44.974 10:56:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.974 10:56:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:44.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:44.974 10:56:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.974 10:56:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:44.974 10:56:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:44.974 10:56:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:44.974 10:56:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:44.974 10:56:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.974 10:56:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.974 10:56:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.974 10:56:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:44.974 10:56:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.974 10:56:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.974 10:56:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:44.974 10:56:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.975 10:56:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.975 10:56:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:44.975 10:56:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:44.975 10:56:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.975 10:56:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.975 10:56:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.975 10:56:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.975 10:56:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:44.975 10:56:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.237 10:56:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.237 10:56:41 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.237 10:56:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:45.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:10:45.237 00:10:45.237 --- 10.0.0.2 ping statistics --- 00:10:45.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.237 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:10:45.237 10:56:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:10:45.237 00:10:45.237 --- 10.0.0.1 ping statistics --- 00:10:45.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.237 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:10:45.237 10:56:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.237 10:56:41 -- nvmf/common.sh@411 -- # return 0 00:10:45.237 10:56:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:45.237 10:56:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.237 10:56:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:45.237 10:56:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:45.237 10:56:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.237 10:56:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:45.237 10:56:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:45.237 10:56:41 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:45.237 10:56:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:45.237 10:56:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:45.237 10:56:41 -- common/autotest_common.sh@10 -- # set +x 00:10:45.237 10:56:41 -- nvmf/common.sh@470 -- # nvmfpid=227868 00:10:45.237 10:56:41 -- nvmf/common.sh@471 -- # waitforlisten 227868 00:10:45.237 10:56:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:45.237 10:56:41 -- common/autotest_common.sh@827 -- # '[' -z 227868 ']' 00:10:45.237 10:56:41 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.237 10:56:41 -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:45.237 10:56:41 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.237 10:56:41 -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:45.237 10:56:41 -- common/autotest_common.sh@10 -- # set +x 00:10:45.237 [2024-05-15 10:56:41.811159] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:10:45.237 [2024-05-15 10:56:41.811220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.237 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.498 [2024-05-15 10:56:41.898878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:45.498 [2024-05-15 10:56:41.992297] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:45.498 [2024-05-15 10:56:41.992353] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.498 [2024-05-15 10:56:41.992362] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.498 [2024-05-15 10:56:41.992369] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.498 [2024-05-15 10:56:41.992376] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.498 [2024-05-15 10:56:41.992507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.498 [2024-05-15 10:56:41.992673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.498 [2024-05-15 10:56:41.992860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.071 10:56:42 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:46.071 10:56:42 -- common/autotest_common.sh@860 -- # return 0 00:10:46.071 10:56:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:46.071 10:56:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.071 10:56:42 -- common/autotest_common.sh@10 -- # set +x 00:10:46.071 10:56:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.071 10:56:42 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.071 10:56:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.071 10:56:42 -- common/autotest_common.sh@10 -- # set +x 00:10:46.071 [2024-05-15 10:56:42.642178] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.071 10:56:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.071 10:56:42 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:46.071 10:56:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.071 10:56:42 -- common/autotest_common.sh@10 -- # set +x 00:10:46.071 10:56:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.071 10:56:42 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.071 10:56:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.071 10:56:42 -- common/autotest_common.sh@10 -- # set +x 00:10:46.071 [2024-05-15 10:56:42.658381] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:46.071 [2024-05-15 10:56:42.676679] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.071 10:56:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.071 10:56:42 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:46.071 10:56:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.071 10:56:42 -- common/autotest_common.sh@10 -- # set +x 00:10:46.071 NULL1 00:10:46.071 10:56:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.071 10:56:42 -- target/connect_stress.sh@21 -- # PERF_PID=228159 00:10:46.071 10:56:42 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:46.071 10:56:42 -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:46.071 10:56:42 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:46.071 10:56:42 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:46.071 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.071 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.071 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.071 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.071 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.071 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.071 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.071 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.072 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.072 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:46.333 10:56:42 -- target/connect_stress.sh@28 -- # cat 00:10:46.333 10:56:42 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:46.333 10:56:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.333 10:56:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.333 10:56:42 -- common/autotest_common.sh@10 -- 
# set +x 00:10:46.594 10:56:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.594 10:56:43 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:46.594 10:56:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.594 10:56:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.594 10:56:43 -- common/autotest_common.sh@10 -- # set +x 00:10:46.855 10:56:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.855 10:56:43 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:46.855 10:56:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.855 10:56:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.855 10:56:43 -- common/autotest_common.sh@10 -- # set +x 00:10:47.116 10:56:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.116 10:56:43 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:47.116 10:56:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.116 10:56:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.116 10:56:43 -- common/autotest_common.sh@10 -- # set +x 00:10:47.688 10:56:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.688 10:56:44 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:47.688 10:56:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.688 10:56:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.688 10:56:44 -- common/autotest_common.sh@10 -- # set +x 00:10:47.959 10:56:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.959 10:56:44 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:47.959 10:56:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.959 10:56:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.959 10:56:44 -- common/autotest_common.sh@10 -- # set +x 00:10:48.220 10:56:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.220 10:56:44 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:48.220 10:56:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.220 10:56:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.220 10:56:44 -- common/autotest_common.sh@10 -- # set +x 00:10:48.481 10:56:45 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.481 10:56:45 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:48.481 10:56:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.481 10:56:45 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.481 10:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:48.743 10:56:45 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.743 10:56:45 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:48.743 10:56:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.743 10:56:45 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.743 10:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:49.315 10:56:45 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.315 10:56:45 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:49.315 10:56:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.315 10:56:45 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.315 10:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:49.576 10:56:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.576 10:56:46 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:49.577 10:56:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.577 10:56:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.577 10:56:46 -- common/autotest_common.sh@10 -- # set +x 00:10:49.838 
10:56:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.838 10:56:46 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:49.838 10:56:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.838 10:56:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.838 10:56:46 -- common/autotest_common.sh@10 -- # set +x 00:10:50.098 10:56:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.098 10:56:46 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:50.098 10:56:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.098 10:56:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.098 10:56:46 -- common/autotest_common.sh@10 -- # set +x 00:10:50.672 10:56:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.672 10:56:47 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:50.672 10:56:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.672 10:56:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.672 10:56:47 -- common/autotest_common.sh@10 -- # set +x 00:10:50.933 10:56:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.933 10:56:47 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:50.933 10:56:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.933 10:56:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.933 10:56:47 -- common/autotest_common.sh@10 -- # set +x 00:10:51.194 10:56:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.194 10:56:47 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:51.195 10:56:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.195 10:56:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.195 10:56:47 -- common/autotest_common.sh@10 -- # set +x 00:10:51.456 10:56:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.456 10:56:47 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:51.456 10:56:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.456 10:56:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.456 10:56:47 -- common/autotest_common.sh@10 -- # set +x 00:10:51.717 10:56:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.717 10:56:48 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:51.717 10:56:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.717 10:56:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.717 10:56:48 -- common/autotest_common.sh@10 -- # set +x 00:10:52.291 10:56:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.291 10:56:48 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:52.291 10:56:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.291 10:56:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.291 10:56:48 -- common/autotest_common.sh@10 -- # set +x 00:10:52.552 10:56:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.552 10:56:48 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:52.552 10:56:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.552 10:56:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.552 10:56:48 -- common/autotest_common.sh@10 -- # set +x 00:10:52.812 10:56:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.812 10:56:49 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:52.812 10:56:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.812 10:56:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.812 10:56:49 -- common/autotest_common.sh@10 -- # set +x 00:10:53.072 10:56:49 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.073 10:56:49 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:53.073 10:56:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.073 10:56:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.073 10:56:49 -- common/autotest_common.sh@10 -- # set +x 00:10:53.332 10:56:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.332 10:56:49 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:53.332 10:56:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.332 10:56:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.332 10:56:49 -- common/autotest_common.sh@10 -- # set +x 00:10:53.904 10:56:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.904 10:56:50 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:53.904 10:56:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.904 10:56:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.904 10:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:54.165 10:56:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.165 10:56:50 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:54.165 10:56:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.165 10:56:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.165 10:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:54.426 10:56:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.426 10:56:50 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:54.426 10:56:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.426 10:56:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.426 10:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:54.688 10:56:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.688 10:56:51 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:54.688 10:56:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.688 10:56:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.688 10:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:54.948 10:56:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.948 10:56:51 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:54.948 10:56:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.948 10:56:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.948 10:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:55.521 10:56:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.522 10:56:51 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:55.522 10:56:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.522 10:56:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.522 10:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:55.791 10:56:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.791 10:56:52 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:55.791 10:56:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.791 10:56:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.791 10:56:52 -- common/autotest_common.sh@10 -- # set +x 00:10:56.053 10:56:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.053 10:56:52 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:56.053 10:56:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.053 10:56:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.053 10:56:52 -- common/autotest_common.sh@10 -- # set +x 00:10:56.314 Testing NVMe over Fabrics controller 
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:56.314 10:56:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.314 10:56:52 -- target/connect_stress.sh@34 -- # kill -0 228159 00:10:56.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (228159) - No such process 00:10:56.314 10:56:52 -- target/connect_stress.sh@38 -- # wait 228159 00:10:56.314 10:56:52 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:56.314 10:56:52 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:56.314 10:56:52 -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:56.314 10:56:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:56.314 10:56:52 -- nvmf/common.sh@117 -- # sync 00:10:56.314 10:56:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.314 10:56:52 -- nvmf/common.sh@120 -- # set +e 00:10:56.314 10:56:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.314 10:56:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.314 rmmod nvme_tcp 00:10:56.314 rmmod nvme_fabrics 00:10:56.314 rmmod nvme_keyring 00:10:56.314 10:56:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.314 10:56:52 -- nvmf/common.sh@124 -- # set -e 00:10:56.314 10:56:52 -- nvmf/common.sh@125 -- # return 0 00:10:56.314 10:56:52 -- nvmf/common.sh@478 -- # '[' -n 227868 ']' 00:10:56.314 10:56:52 -- nvmf/common.sh@479 -- # killprocess 227868 00:10:56.315 10:56:52 -- common/autotest_common.sh@946 -- # '[' -z 227868 ']' 00:10:56.315 10:56:52 -- common/autotest_common.sh@950 -- # kill -0 227868 00:10:56.315 10:56:52 -- common/autotest_common.sh@951 -- # uname 00:10:56.315 10:56:52 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:56.315 10:56:52 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 227868 00:10:56.577 10:56:53 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:56.577 10:56:53 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:56.577 10:56:53 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 227868' 00:10:56.577 killing process with pid 227868 00:10:56.577 10:56:53 -- common/autotest_common.sh@965 -- # kill 227868 00:10:56.577 [2024-05-15 10:56:53.005466] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:56.577 10:56:53 -- common/autotest_common.sh@970 -- # wait 227868 00:10:56.577 10:56:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:56.577 10:56:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:56.577 10:56:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:56.577 10:56:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.577 10:56:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.577 10:56:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.577 10:56:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.577 10:56:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.123 10:56:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.123 00:10:59.123 real 0m20.570s 00:10:59.123 user 0m43.589s 00:10:59.123 sys 0m7.031s 00:10:59.123 10:56:55 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:59.123 10:56:55 -- common/autotest_common.sh@10 -- # set +x 00:10:59.123 ************************************ 
00:10:59.123 END TEST nvmf_connect_stress 00:10:59.123 ************************************ 00:10:59.123 10:56:55 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:59.123 10:56:55 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:59.123 10:56:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:59.123 10:56:55 -- common/autotest_common.sh@10 -- # set +x 00:10:59.123 ************************************ 00:10:59.123 START TEST nvmf_fused_ordering 00:10:59.123 ************************************ 00:10:59.123 10:56:55 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:59.123 * Looking for test storage... 00:10:59.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.123 10:56:55 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.123 10:56:55 -- nvmf/common.sh@7 -- # uname -s 00:10:59.123 10:56:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.123 10:56:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.123 10:56:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.123 10:56:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.123 10:56:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.123 10:56:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.123 10:56:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.123 10:56:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.123 10:56:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.123 10:56:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.123 10:56:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:59.123 10:56:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:59.123 10:56:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.123 10:56:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.123 10:56:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.123 10:56:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.123 10:56:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.123 10:56:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.123 10:56:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.123 10:56:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.123 10:56:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.123 10:56:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.123 10:56:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.123 10:56:55 -- paths/export.sh@5 -- # export PATH 00:10:59.123 10:56:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.123 10:56:55 -- nvmf/common.sh@47 -- # : 0 00:10:59.123 10:56:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.123 10:56:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.124 10:56:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.124 10:56:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.124 10:56:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.124 10:56:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.124 10:56:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.124 10:56:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.124 10:56:55 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:59.124 10:56:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:59.124 10:56:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.124 10:56:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:59.124 10:56:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:59.124 10:56:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:59.124 10:56:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.124 10:56:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.124 10:56:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.124 10:56:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:59.124 10:56:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:59.124 10:56:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.124 10:56:55 -- common/autotest_common.sh@10 -- # set +x 00:11:05.722 10:57:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:05.722 10:57:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.722 10:57:01 -- nvmf/common.sh@291 -- # local -a pci_devs 
00:11:05.722 10:57:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.722 10:57:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.722 10:57:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.722 10:57:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.722 10:57:01 -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.722 10:57:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.722 10:57:01 -- nvmf/common.sh@296 -- # e810=() 00:11:05.722 10:57:01 -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.722 10:57:01 -- nvmf/common.sh@297 -- # x722=() 00:11:05.722 10:57:01 -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.722 10:57:01 -- nvmf/common.sh@298 -- # mlx=() 00:11:05.722 10:57:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.722 10:57:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.722 10:57:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.722 10:57:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.722 10:57:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.722 10:57:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.722 10:57:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:05.722 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:05.722 10:57:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.722 10:57:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:05.722 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:05.722 10:57:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.722 10:57:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.722 10:57:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:11:05.722 10:57:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.722 10:57:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.722 10:57:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:05.722 10:57:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.722 10:57:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:05.722 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:05.722 10:57:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.722 10:57:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.722 10:57:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.722 10:57:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:05.722 10:57:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.722 10:57:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:05.722 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:05.722 10:57:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.723 10:57:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:05.723 10:57:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:05.723 10:57:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:05.723 10:57:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:05.723 10:57:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:05.723 10:57:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.723 10:57:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.723 10:57:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.723 10:57:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.723 10:57:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.723 10:57:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.723 10:57:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.723 10:57:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.723 10:57:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.723 10:57:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:05.723 10:57:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.723 10:57:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.723 10:57:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.723 10:57:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.723 10:57:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.723 10:57:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:05.723 10:57:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.723 10:57:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.723 10:57:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.723 10:57:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:05.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:05.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:11:05.723 00:11:05.723 --- 10.0.0.2 ping statistics --- 00:11:05.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.723 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:11:05.723 10:57:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:05.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:11:05.723 00:11:05.723 --- 10.0.0.1 ping statistics --- 00:11:05.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.723 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:11:05.723 10:57:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.723 10:57:02 -- nvmf/common.sh@411 -- # return 0 00:11:05.723 10:57:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:05.723 10:57:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.723 10:57:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:05.723 10:57:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:05.723 10:57:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.723 10:57:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:05.723 10:57:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:05.723 10:57:02 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:05.723 10:57:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:05.723 10:57:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:05.723 10:57:02 -- common/autotest_common.sh@10 -- # set +x 00:11:05.723 10:57:02 -- nvmf/common.sh@470 -- # nvmfpid=234390 00:11:05.723 10:57:02 -- nvmf/common.sh@471 -- # waitforlisten 234390 00:11:05.723 10:57:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:05.723 10:57:02 -- common/autotest_common.sh@827 -- # '[' -z 234390 ']' 00:11:05.723 10:57:02 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.723 10:57:02 -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:05.723 10:57:02 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.723 10:57:02 -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:05.723 10:57:02 -- common/autotest_common.sh@10 -- # set +x 00:11:05.723 [2024-05-15 10:57:02.359858] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:11:05.723 [2024-05-15 10:57:02.359923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.985 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.985 [2024-05-15 10:57:02.444695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.985 [2024-05-15 10:57:02.536901] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.985 [2024-05-15 10:57:02.536956] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:05.985 [2024-05-15 10:57:02.536966] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.985 [2024-05-15 10:57:02.536974] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.985 [2024-05-15 10:57:02.536981] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.985 [2024-05-15 10:57:02.537004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.557 10:57:03 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:06.558 10:57:03 -- common/autotest_common.sh@860 -- # return 0 00:11:06.558 10:57:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:06.558 10:57:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.558 10:57:03 -- common/autotest_common.sh@10 -- # set +x 00:11:06.558 10:57:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.558 10:57:03 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.558 10:57:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.558 10:57:03 -- common/autotest_common.sh@10 -- # set +x 00:11:06.558 [2024-05-15 10:57:03.194540] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.558 10:57:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.558 10:57:03 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:06.558 10:57:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.558 10:57:03 -- common/autotest_common.sh@10 -- # set +x 00:11:06.818 10:57:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.818 10:57:03 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.818 10:57:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.818 10:57:03 -- common/autotest_common.sh@10 -- # set +x 00:11:06.818 [2024-05-15 10:57:03.218533] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:06.818 [2024-05-15 10:57:03.218777] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.818 10:57:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.818 10:57:03 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:06.818 10:57:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.818 10:57:03 -- common/autotest_common.sh@10 -- # set +x 00:11:06.818 NULL1 00:11:06.818 10:57:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.818 10:57:03 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:06.818 10:57:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.818 10:57:03 -- common/autotest_common.sh@10 -- # set +x 00:11:06.818 10:57:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.818 10:57:03 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:06.818 10:57:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.818 10:57:03 -- common/autotest_common.sh@10 -- # set +x 00:11:06.818 10:57:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.818 10:57:03 -- target/fused_ordering.sh@22 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:06.818 [2024-05-15 10:57:03.285599] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:11:06.818 [2024-05-15 10:57:03.285640] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234639 ] 00:11:06.818 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.390 Attached to nqn.2016-06.io.spdk:cnode1 00:11:07.390 Namespace ID: 1 size: 1GB 00:11:07.390 fused_ordering(0) 00:11:07.390 fused_ordering(1) 00:11:07.390 fused_ordering(2) 00:11:07.390 fused_ordering(3) 00:11:07.390 fused_ordering(4) 00:11:07.390 fused_ordering(5) 00:11:07.390 fused_ordering(6) 00:11:07.390 fused_ordering(7) 00:11:07.390 fused_ordering(8) 00:11:07.390 fused_ordering(9) 00:11:07.390 fused_ordering(10) 00:11:07.390 fused_ordering(11) 00:11:07.390 fused_ordering(12) 00:11:07.390 fused_ordering(13) 00:11:07.390 fused_ordering(14) 00:11:07.390 fused_ordering(15) 00:11:07.390 fused_ordering(16) 00:11:07.390 fused_ordering(17) 00:11:07.390 fused_ordering(18) 00:11:07.390 fused_ordering(19) 00:11:07.390 fused_ordering(20) 00:11:07.390 fused_ordering(21) 00:11:07.390 fused_ordering(22) 00:11:07.390 fused_ordering(23) 00:11:07.390 fused_ordering(24) 00:11:07.390 fused_ordering(25) 00:11:07.390 fused_ordering(26) 00:11:07.390 fused_ordering(27) 00:11:07.390 fused_ordering(28) 00:11:07.390 fused_ordering(29) 00:11:07.390 fused_ordering(30) 00:11:07.390 fused_ordering(31) 00:11:07.390 fused_ordering(32) 00:11:07.390 fused_ordering(33) 00:11:07.390 fused_ordering(34) 00:11:07.390 fused_ordering(35) 00:11:07.390 fused_ordering(36) 00:11:07.390 fused_ordering(37) 00:11:07.390 fused_ordering(38) 00:11:07.390 fused_ordering(39) 00:11:07.390 fused_ordering(40) 00:11:07.390 fused_ordering(41) 00:11:07.390 fused_ordering(42) 00:11:07.390 fused_ordering(43) 00:11:07.390 fused_ordering(44) 00:11:07.390 fused_ordering(45) 00:11:07.390 fused_ordering(46) 00:11:07.390 fused_ordering(47) 00:11:07.390 fused_ordering(48) 00:11:07.390 fused_ordering(49) 00:11:07.390 fused_ordering(50) 00:11:07.390 fused_ordering(51) 00:11:07.390 fused_ordering(52) 00:11:07.390 fused_ordering(53) 00:11:07.390 fused_ordering(54) 00:11:07.390 fused_ordering(55) 00:11:07.390 fused_ordering(56) 00:11:07.390 fused_ordering(57) 00:11:07.390 fused_ordering(58) 00:11:07.390 fused_ordering(59) 00:11:07.390 fused_ordering(60) 00:11:07.390 fused_ordering(61) 00:11:07.390 fused_ordering(62) 00:11:07.390 fused_ordering(63) 00:11:07.390 fused_ordering(64) 00:11:07.390 fused_ordering(65) 00:11:07.390 fused_ordering(66) 00:11:07.390 fused_ordering(67) 00:11:07.390 fused_ordering(68) 00:11:07.390 fused_ordering(69) 00:11:07.390 fused_ordering(70) 00:11:07.390 fused_ordering(71) 00:11:07.391 fused_ordering(72) 00:11:07.391 fused_ordering(73) 00:11:07.391 fused_ordering(74) 00:11:07.391 fused_ordering(75) 00:11:07.391 fused_ordering(76) 00:11:07.391 fused_ordering(77) 00:11:07.391 fused_ordering(78) 00:11:07.391 fused_ordering(79) 00:11:07.391 fused_ordering(80) 00:11:07.391 fused_ordering(81) 00:11:07.391 fused_ordering(82) 00:11:07.391 fused_ordering(83) 00:11:07.391 fused_ordering(84) 00:11:07.391 fused_ordering(85) 00:11:07.391 fused_ordering(86) 00:11:07.391 fused_ordering(87) 00:11:07.391 
fused_ordering(88) 00:11:07.391 ... fused_ordering(1023) 00:11:09.370 [repetitive fused_ordering output condensed: every iteration from 88 through 1023 was logged between 00:11:07.391 and 00:11:09.370, with no errors reported in this span]
10:57:05 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:09.370 10:57:05 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:09.370 10:57:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:09.370 10:57:05 -- nvmf/common.sh@117 -- # sync 00:11:09.370 10:57:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:09.370 10:57:05 -- nvmf/common.sh@120 -- # set +e 00:11:09.370 10:57:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:09.370 10:57:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:09.370 rmmod nvme_tcp 00:11:09.370 rmmod nvme_fabrics 00:11:09.370 rmmod nvme_keyring 00:11:09.370 10:57:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:09.370 10:57:05 -- nvmf/common.sh@124 -- # set -e 00:11:09.370 10:57:05 -- nvmf/common.sh@125 -- # return 0 00:11:09.370 10:57:05 -- nvmf/common.sh@478 -- # '[' -n 234390 ']' 00:11:09.370 10:57:05 -- nvmf/common.sh@479 -- # killprocess 234390 00:11:09.370 10:57:05 -- common/autotest_common.sh@946 -- # '[' -z 234390 ']' 00:11:09.370 10:57:05 --
common/autotest_common.sh@950 -- # kill -0 234390 00:11:09.370 10:57:05 -- common/autotest_common.sh@951 -- # uname 00:11:09.370 10:57:05 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:09.370 10:57:05 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 234390 00:11:09.370 10:57:05 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:09.370 10:57:05 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:09.370 10:57:05 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 234390' 00:11:09.370 killing process with pid 234390 00:11:09.370 10:57:05 -- common/autotest_common.sh@965 -- # kill 234390 00:11:09.370 [2024-05-15 10:57:05.943395] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:09.370 10:57:05 -- common/autotest_common.sh@970 -- # wait 234390 00:11:09.632 10:57:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:09.632 10:57:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:09.632 10:57:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:09.632 10:57:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.632 10:57:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:09.632 10:57:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.632 10:57:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.632 10:57:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.545 10:57:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:11.545 00:11:11.545 real 0m12.876s 00:11:11.545 user 0m7.542s 00:11:11.545 sys 0m6.261s 00:11:11.545 10:57:08 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.545 10:57:08 -- common/autotest_common.sh@10 -- # set +x 00:11:11.545 ************************************ 00:11:11.545 END TEST nvmf_fused_ordering 00:11:11.545 ************************************ 00:11:11.807 10:57:08 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:11.807 10:57:08 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:11.807 10:57:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.807 10:57:08 -- common/autotest_common.sh@10 -- # set +x 00:11:11.807 ************************************ 00:11:11.807 START TEST nvmf_delete_subsystem 00:11:11.807 ************************************ 00:11:11.807 10:57:08 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:11.807 * Looking for test storage... 
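The nvmftestfini/killprocess sequence logged just above (before the END TEST banner) is the standard teardown between these nvmf runs. Reduced to plain commands it is roughly the sketch below; the pid 234390 and the interface/namespace names are the ones visible in this log, the namespace removal is assumed to be a plain "ip netns del", and "wait" only works here because the harness started nvmf_tgt from the same shell.

  # Teardown performed by nvmftestfini after the fused_ordering run (sketch).
  sync
  modprobe -v -r nvme-tcp          # the log shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
  modprobe -v -r nvme-fabrics
  kill 234390                      # killprocess: stop the nvmf_tgt started for the previous test
  wait 234390 || true              # reap it; tolerate the non-zero exit of a killed process
  ip netns del cvl_0_0_ns_spdk     # _remove_spdk_ns (assumed form of the helper)
  ip -4 addr flush cvl_0_1         # drop the initiator-side test address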
00:11:11.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.807 10:57:08 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.807 10:57:08 -- nvmf/common.sh@7 -- # uname -s 00:11:11.807 10:57:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.807 10:57:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.807 10:57:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.807 10:57:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.807 10:57:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.807 10:57:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.807 10:57:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.807 10:57:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.807 10:57:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.807 10:57:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.807 10:57:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:11.807 10:57:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:11.807 10:57:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.807 10:57:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.807 10:57:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.807 10:57:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.807 10:57:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.807 10:57:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.807 10:57:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.807 10:57:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.807 10:57:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.807 10:57:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.807 10:57:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.807 10:57:08 -- paths/export.sh@5 -- # export PATH 00:11:11.807 10:57:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.807 10:57:08 -- nvmf/common.sh@47 -- # : 0 00:11:11.807 10:57:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.807 10:57:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.807 10:57:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.807 10:57:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.807 10:57:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.807 10:57:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.807 10:57:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.807 10:57:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.807 10:57:08 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:11.807 10:57:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:11.808 10:57:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.808 10:57:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:11.808 10:57:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:11.808 10:57:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:11.808 10:57:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.808 10:57:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.808 10:57:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.808 10:57:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:11.808 10:57:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:11.808 10:57:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.808 10:57:08 -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 10:57:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:18.402 10:57:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.402 10:57:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.402 10:57:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.402 10:57:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.402 10:57:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.402 10:57:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.402 10:57:14 -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.402 10:57:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.402 10:57:14 -- nvmf/common.sh@296 -- # e810=() 00:11:18.402 10:57:14 -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.402 10:57:14 -- nvmf/common.sh@297 -- # x722=() 
00:11:18.402 10:57:14 -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.402 10:57:14 -- nvmf/common.sh@298 -- # mlx=() 00:11:18.402 10:57:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.402 10:57:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.402 10:57:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.402 10:57:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.402 10:57:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.402 10:57:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.402 10:57:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:18.402 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:18.402 10:57:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.402 10:57:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:18.402 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:18.402 10:57:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.402 10:57:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.402 10:57:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.402 10:57:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.402 10:57:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:18.402 10:57:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.402 10:57:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:18.403 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:18.403 10:57:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:18.403 10:57:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.403 10:57:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.403 10:57:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:18.403 10:57:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.403 10:57:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:18.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:18.403 10:57:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.403 10:57:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:18.403 10:57:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:18.403 10:57:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:18.403 10:57:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:18.403 10:57:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:18.403 10:57:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.403 10:57:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.403 10:57:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.403 10:57:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.403 10:57:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.403 10:57:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.403 10:57:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.403 10:57:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.403 10:57:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.403 10:57:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:18.403 10:57:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.403 10:57:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.403 10:57:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.403 10:57:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.403 10:57:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.403 10:57:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.403 10:57:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.663 10:57:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.663 10:57:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.664 10:57:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:11:18.664 00:11:18.664 --- 10.0.0.2 ping statistics --- 00:11:18.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.664 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:11:18.664 10:57:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:11:18.664 00:11:18.664 --- 10.0.0.1 ping statistics --- 00:11:18.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.664 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:11:18.664 10:57:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.664 10:57:15 -- nvmf/common.sh@411 -- # return 0 00:11:18.664 10:57:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:18.664 10:57:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.664 10:57:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:18.664 10:57:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:18.664 10:57:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.664 10:57:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:18.664 10:57:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:18.664 10:57:15 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:18.664 10:57:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:18.664 10:57:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:18.664 10:57:15 -- common/autotest_common.sh@10 -- # set +x 00:11:18.664 10:57:15 -- nvmf/common.sh@470 -- # nvmfpid=239751 00:11:18.664 10:57:15 -- nvmf/common.sh@471 -- # waitforlisten 239751 00:11:18.664 10:57:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:18.664 10:57:15 -- common/autotest_common.sh@827 -- # '[' -z 239751 ']' 00:11:18.664 10:57:15 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.664 10:57:15 -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:18.664 10:57:15 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.664 10:57:15 -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:18.664 10:57:15 -- common/autotest_common.sh@10 -- # set +x 00:11:18.664 [2024-05-15 10:57:15.200702] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:11:18.664 [2024-05-15 10:57:15.200750] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.664 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.664 [2024-05-15 10:57:15.265052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:18.925 [2024-05-15 10:57:15.331308] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.925 [2024-05-15 10:57:15.331343] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.925 [2024-05-15 10:57:15.331351] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.925 [2024-05-15 10:57:15.331358] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.925 [2024-05-15 10:57:15.331364] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
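Everything nvmftestinit and nvmfappstart did above, from the E810 port discovery (cvl_0_0 and cvl_0_1) through the namespace plumbing, the ping checks and the target launch, reduces to a short sequence of iproute2/iptables commands plus one nvmf_tgt invocation. A minimal stand-alone sketch follows; the interface names, addresses, port and binary path are taken from this log, and it assumes the NIC driver binding that the harness already performed.

  #!/usr/bin/env bash
  # TCP test-bed setup as performed by nvmftestinit/nvmfappstart above (sketch).
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TGT_NS=cvl_0_0_ns_spdk           # namespace that owns the target-side port
  TGT_IF=cvl_0_0                   # target-side E810 port (0000:4b:00.0)
  INI_IF=cvl_0_1                   # initiator-side E810 port (0000:4b:00.1)

  ip netns add "$TGT_NS"
  ip link set "$TGT_IF" netns "$TGT_NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
  ip netns exec "$TGT_NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2               # initiator -> target reachability check, as in the log

  # Launch the NVMe-oF target inside the namespace on cores 0-1 (-m 0x3), trace mask 0xFFFF.
  ip netns exec "$TGT_NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &

The harness then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock before any rpc_cmd configuration is attempted.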
00:11:18.925 [2024-05-15 10:57:15.331499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.925 [2024-05-15 10:57:15.331500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.496 10:57:15 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:19.496 10:57:15 -- common/autotest_common.sh@860 -- # return 0 00:11:19.496 10:57:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:19.496 10:57:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.496 10:57:15 -- common/autotest_common.sh@10 -- # set +x 00:11:19.496 10:57:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.496 10:57:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.496 10:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:19.496 [2024-05-15 10:57:16.023022] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.496 10:57:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.496 10:57:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.496 10:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:19.496 10:57:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.496 10:57:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.496 10:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:19.496 [2024-05-15 10:57:16.039007] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:19.496 [2024-05-15 10:57:16.039189] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.496 10:57:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:19.496 10:57:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.496 10:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:19.496 NULL1 00:11:19.496 10:57:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:19.496 10:57:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.496 10:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:19.496 Delay0 00:11:19.496 10:57:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.496 10:57:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.496 10:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:19.496 10:57:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@28 -- # perf_pid=239866 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:19.496 10:57:16 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:19.496 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.496 [2024-05-15 10:57:16.123893] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:22.044 10:57:18 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.044 10:57:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.044 10:57:18 -- common/autotest_common.sh@10 -- # set +x 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 Read completed with error (sct=0, sc=8) 00:11:22.044 starting I/O failed: -6 00:11:22.044 Write completed with error (sct=0, sc=8) 00:11:22.044 [2024-05-15 10:57:18.367683] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6676a0 is same with the state(5) to be set
[repetitive I/O error output condensed: a long run of "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" lines follows between 00:11:22.044 and 00:11:22.989, interleaved with repeated "starting I/O failed: -6" entries and further nvme_tcp_qpair_set_recv_state errors for tqpair=0x6680c0, 0x7f96c800c470, 0x667060, 0x668f10, 0x66fc20 and 0x7f96c800c780 (timestamps 10:57:18 to 10:57:19)]
Write completed with error (sct=0, sc=8) 00:11:22.989 Read completed with error (sct=0, sc=8) 00:11:22.989 Read completed with error (sct=0, sc=8) 00:11:22.989 Write completed with error (sct=0, sc=8) 00:11:22.989 Read completed with error (sct=0, sc=8) 00:11:22.989 Write completed with error (sct=0, sc=8) 00:11:22.989 Write completed with error (sct=0, sc=8) 00:11:22.989 Read completed with error (sct=0, sc=8) 00:11:22.989 Read completed with error (sct=0, sc=8) 00:11:22.989 Read completed with error (sct=0, sc=8) 00:11:22.989 Read completed with error (sct=0, sc=8) 00:11:22.989 Read completed with error (sct=0, sc=8) 00:11:22.989 [2024-05-15 10:57:19.373978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f96c800bfe0 is same with the state(5) to be set 00:11:22.989 [2024-05-15 10:57:19.374469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x667060 (9): Bad file descriptor 00:11:22.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:22.989 10:57:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.989 Initializing NVMe Controllers 00:11:22.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:22.989 Controller IO queue size 128, less than required. 00:11:22.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:22.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:22.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:22.989 Initialization complete. Launching workers. 00:11:22.989 ======================================================== 00:11:22.989 Latency(us) 00:11:22.989 Device Information : IOPS MiB/s Average min max 00:11:22.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.94 0.08 898981.34 630.69 1005987.30 00:11:22.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.96 0.08 951743.22 257.47 2002405.46 00:11:22.989 ======================================================== 00:11:22.989 Total : 330.90 0.16 924964.98 257.47 2002405.46 00:11:22.989 00:11:22.989 10:57:19 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:22.989 10:57:19 -- target/delete_subsystem.sh@35 -- # kill -0 239866 00:11:22.989 10:57:19 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:23.251 10:57:19 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:23.251 10:57:19 -- target/delete_subsystem.sh@35 -- # kill -0 239866 00:11:23.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (239866) - No such process 00:11:23.251 10:57:19 -- target/delete_subsystem.sh@45 -- # NOT wait 239866 00:11:23.251 10:57:19 -- common/autotest_common.sh@648 -- # local es=0 00:11:23.251 10:57:19 -- common/autotest_common.sh@650 -- # valid_exec_arg wait 239866 00:11:23.251 10:57:19 -- common/autotest_common.sh@636 -- # local arg=wait 00:11:23.251 10:57:19 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.251 10:57:19 -- common/autotest_common.sh@640 -- # type -t wait 00:11:23.251 10:57:19 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.251 10:57:19 -- common/autotest_common.sh@651 -- # wait 239866 00:11:23.251 10:57:19 -- common/autotest_common.sh@651 -- # es=1 00:11:23.251 10:57:19 -- common/autotest_common.sh@659 -- # (( es > 128 )) 
00:11:23.251 10:57:19 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:23.251 10:57:19 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:23.251 10:57:19 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:23.251 10:57:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.251 10:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:23.251 10:57:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.251 10:57:19 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.251 10:57:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.251 10:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:23.511 [2024-05-15 10:57:19.906702] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.511 10:57:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.511 10:57:19 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.511 10:57:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.511 10:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:23.511 10:57:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.511 10:57:19 -- target/delete_subsystem.sh@54 -- # perf_pid=240776 00:11:23.511 10:57:19 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:23.511 10:57:19 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:23.511 10:57:19 -- target/delete_subsystem.sh@57 -- # kill -0 240776 00:11:23.511 10:57:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.511 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.511 [2024-05-15 10:57:19.974632] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
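The (sct=0, sc=8) completions filling the trace above map to the NVMe generic "command aborted due to SQ deletion" status, which is what the host side reports while the subsystem is torn down under active I/O; that is the behaviour delete_subsystem.sh is checking for. For reference, the target setup the test then re-creates before the second perf run reduces to the RPC sequence below. This is a minimal sketch, assuming a running nvmf_tgt and an existing Delay0 bdev; every NQN, address and flag is copied from the trace, and rpc.py stands for scripts/rpc.py (the trace invokes it through the rpc_cmd wrapper).

# Re-create the subsystem, listener and namespace, then drive I/O for 3 seconds
# while the test deletes the subsystem mid-run (flags copied from the trace).
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!   # the trace records this as perf_pid=240776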
00:11:24.083 10:57:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.083 10:57:20 -- target/delete_subsystem.sh@57 -- # kill -0 240776 00:11:24.083 10:57:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.344 10:57:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.344 10:57:20 -- target/delete_subsystem.sh@57 -- # kill -0 240776 00:11:24.344 10:57:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.916 10:57:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.916 10:57:21 -- target/delete_subsystem.sh@57 -- # kill -0 240776 00:11:24.916 10:57:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.488 10:57:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.488 10:57:21 -- target/delete_subsystem.sh@57 -- # kill -0 240776 00:11:25.488 10:57:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.059 10:57:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.059 10:57:22 -- target/delete_subsystem.sh@57 -- # kill -0 240776 00:11:26.059 10:57:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.318 10:57:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.318 10:57:22 -- target/delete_subsystem.sh@57 -- # kill -0 240776 00:11:26.319 10:57:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.579 Initializing NVMe Controllers 00:11:26.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:26.579 Controller IO queue size 128, less than required. 00:11:26.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:26.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:26.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:26.579 Initialization complete. Launching workers. 
00:11:26.579 ======================================================== 00:11:26.579 Latency(us) 00:11:26.579 Device Information : IOPS MiB/s Average min max 00:11:26.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001884.28 1000158.54 1004991.43 00:11:26.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002871.86 1000297.38 1008958.60 00:11:26.579 ======================================================== 00:11:26.579 Total : 256.00 0.12 1002378.07 1000158.54 1008958.60 00:11:26.579 00:11:26.839 10:57:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.839 10:57:23 -- target/delete_subsystem.sh@57 -- # kill -0 240776 00:11:26.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (240776) - No such process 00:11:26.839 10:57:23 -- target/delete_subsystem.sh@67 -- # wait 240776 00:11:26.839 10:57:23 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:26.839 10:57:23 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:26.839 10:57:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:26.839 10:57:23 -- nvmf/common.sh@117 -- # sync 00:11:26.839 10:57:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.839 10:57:23 -- nvmf/common.sh@120 -- # set +e 00:11:26.839 10:57:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.839 10:57:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.839 rmmod nvme_tcp 00:11:26.839 rmmod nvme_fabrics 00:11:27.099 rmmod nvme_keyring 00:11:27.099 10:57:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.099 10:57:23 -- nvmf/common.sh@124 -- # set -e 00:11:27.099 10:57:23 -- nvmf/common.sh@125 -- # return 0 00:11:27.099 10:57:23 -- nvmf/common.sh@478 -- # '[' -n 239751 ']' 00:11:27.099 10:57:23 -- nvmf/common.sh@479 -- # killprocess 239751 00:11:27.099 10:57:23 -- common/autotest_common.sh@946 -- # '[' -z 239751 ']' 00:11:27.099 10:57:23 -- common/autotest_common.sh@950 -- # kill -0 239751 00:11:27.099 10:57:23 -- common/autotest_common.sh@951 -- # uname 00:11:27.099 10:57:23 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:27.099 10:57:23 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 239751 00:11:27.100 10:57:23 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:27.100 10:57:23 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:27.100 10:57:23 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 239751' 00:11:27.100 killing process with pid 239751 00:11:27.100 10:57:23 -- common/autotest_common.sh@965 -- # kill 239751 00:11:27.100 [2024-05-15 10:57:23.574434] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:27.100 10:57:23 -- common/autotest_common.sh@970 -- # wait 239751 00:11:27.100 10:57:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:27.100 10:57:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:27.100 10:57:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:27.100 10:57:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.100 10:57:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.100 10:57:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.100 10:57:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.100 10:57:23 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.645 10:57:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.645 00:11:29.645 real 0m17.530s 00:11:29.645 user 0m30.914s 00:11:29.645 sys 0m5.847s 00:11:29.645 10:57:25 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:29.645 10:57:25 -- common/autotest_common.sh@10 -- # set +x 00:11:29.645 ************************************ 00:11:29.645 END TEST nvmf_delete_subsystem 00:11:29.645 ************************************ 00:11:29.645 10:57:25 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:29.645 10:57:25 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:29.645 10:57:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:29.645 10:57:25 -- common/autotest_common.sh@10 -- # set +x 00:11:29.645 ************************************ 00:11:29.645 START TEST nvmf_ns_masking 00:11:29.645 ************************************ 00:11:29.645 10:57:25 -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:29.645 * Looking for test storage... 00:11:29.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.645 10:57:25 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.645 10:57:25 -- nvmf/common.sh@7 -- # uname -s 00:11:29.645 10:57:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.645 10:57:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.645 10:57:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.645 10:57:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.645 10:57:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.645 10:57:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.645 10:57:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.645 10:57:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.645 10:57:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.645 10:57:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.645 10:57:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:29.645 10:57:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:29.645 10:57:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.645 10:57:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.645 10:57:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.645 10:57:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.645 10:57:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.645 10:57:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.645 10:57:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.645 10:57:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.645 10:57:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.645 10:57:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.645 10:57:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.645 10:57:25 -- paths/export.sh@5 -- # export PATH 00:11:29.645 10:57:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.645 10:57:25 -- nvmf/common.sh@47 -- # : 0 00:11:29.645 10:57:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.645 10:57:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.645 10:57:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.645 10:57:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.645 10:57:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.645 10:57:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.645 10:57:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.645 10:57:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.645 10:57:25 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.645 10:57:25 -- target/ns_masking.sh@11 -- # loops=5 00:11:29.645 10:57:25 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:29.645 10:57:25 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:29.645 10:57:25 -- target/ns_masking.sh@15 -- # uuidgen 00:11:29.645 10:57:26 -- target/ns_masking.sh@15 -- # HOSTID=739e0402-a2e3-40f9-a85d-fe1f8292a59d 00:11:29.645 10:57:26 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:29.645 10:57:26 -- 
nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:29.645 10:57:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.646 10:57:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:29.646 10:57:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:29.646 10:57:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:29.646 10:57:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.646 10:57:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.646 10:57:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.646 10:57:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:29.646 10:57:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:29.646 10:57:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.646 10:57:26 -- common/autotest_common.sh@10 -- # set +x 00:11:36.236 10:57:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:36.236 10:57:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:36.236 10:57:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:36.236 10:57:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:36.236 10:57:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:36.236 10:57:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:36.236 10:57:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:36.236 10:57:32 -- nvmf/common.sh@295 -- # net_devs=() 00:11:36.236 10:57:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:36.236 10:57:32 -- nvmf/common.sh@296 -- # e810=() 00:11:36.236 10:57:32 -- nvmf/common.sh@296 -- # local -ga e810 00:11:36.236 10:57:32 -- nvmf/common.sh@297 -- # x722=() 00:11:36.236 10:57:32 -- nvmf/common.sh@297 -- # local -ga x722 00:11:36.236 10:57:32 -- nvmf/common.sh@298 -- # mlx=() 00:11:36.236 10:57:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:36.236 10:57:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.236 10:57:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:36.236 10:57:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:36.236 10:57:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:36.236 10:57:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.236 10:57:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:36.236 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:36.236 10:57:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:11:36.236 10:57:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.236 10:57:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:36.236 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:36.236 10:57:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:36.236 10:57:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.236 10:57:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.236 10:57:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:36.236 10:57:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.236 10:57:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:36.236 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:36.236 10:57:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.236 10:57:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.236 10:57:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.236 10:57:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:36.236 10:57:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.236 10:57:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:36.236 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:36.236 10:57:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.236 10:57:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:36.236 10:57:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:36.236 10:57:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:36.236 10:57:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:36.236 10:57:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.236 10:57:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.236 10:57:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.236 10:57:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:36.236 10:57:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.236 10:57:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.236 10:57:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:36.236 10:57:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.236 10:57:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.236 10:57:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:36.236 10:57:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:36.236 10:57:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.236 10:57:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:11:36.236 10:57:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.236 10:57:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.236 10:57:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:36.498 10:57:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.498 10:57:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.498 10:57:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.498 10:57:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:36.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:11:36.498 00:11:36.498 --- 10.0.0.2 ping statistics --- 00:11:36.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.498 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:11:36.498 10:57:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:11:36.498 00:11:36.498 --- 10.0.0.1 ping statistics --- 00:11:36.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.498 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:11:36.498 10:57:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.498 10:57:33 -- nvmf/common.sh@411 -- # return 0 00:11:36.498 10:57:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:36.498 10:57:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.498 10:57:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:36.498 10:57:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:36.498 10:57:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.498 10:57:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:36.498 10:57:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:36.498 10:57:33 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:36.498 10:57:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:36.498 10:57:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:36.498 10:57:33 -- common/autotest_common.sh@10 -- # set +x 00:11:36.498 10:57:33 -- nvmf/common.sh@470 -- # nvmfpid=245475 00:11:36.498 10:57:33 -- nvmf/common.sh@471 -- # waitforlisten 245475 00:11:36.498 10:57:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.498 10:57:33 -- common/autotest_common.sh@827 -- # '[' -z 245475 ']' 00:11:36.498 10:57:33 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.498 10:57:33 -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:36.498 10:57:33 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.498 10:57:33 -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:36.498 10:57:33 -- common/autotest_common.sh@10 -- # set +x 00:11:36.498 [2024-05-15 10:57:33.107995] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
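Because both e810 ports (cvl_0_0 and cvl_0_1) live on the same test host, the harness splits them across network namespaces: cvl_0_0 (10.0.0.2) is moved into cvl_0_0_ns_spdk and serves as the target side, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. A condensed sketch of that setup, using only commands that appear in the trace above (run as root; the nvmf_tgt path is shortened):

# Put the target port in its own namespace and address both sides
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on the initiator-side interface
ping -c 1 10.0.0.2                                             # reachability check before starting the target
# The target application is then launched inside the namespace:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF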
00:11:36.498 [2024-05-15 10:57:33.108060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.498 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.758 [2024-05-15 10:57:33.177010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.758 [2024-05-15 10:57:33.252416] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.758 [2024-05-15 10:57:33.252454] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.758 [2024-05-15 10:57:33.252463] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.758 [2024-05-15 10:57:33.252469] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.758 [2024-05-15 10:57:33.252475] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.758 [2024-05-15 10:57:33.252544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.758 [2024-05-15 10:57:33.252660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.758 [2024-05-15 10:57:33.252707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.758 [2024-05-15 10:57:33.252709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.328 10:57:33 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:37.328 10:57:33 -- common/autotest_common.sh@860 -- # return 0 00:11:37.328 10:57:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:37.328 10:57:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.328 10:57:33 -- common/autotest_common.sh@10 -- # set +x 00:11:37.328 10:57:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.328 10:57:33 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:37.588 [2024-05-15 10:57:34.067505] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.588 10:57:34 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:37.588 10:57:34 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:37.588 10:57:34 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:37.848 Malloc1 00:11:37.848 10:57:34 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:37.848 Malloc2 00:11:37.848 10:57:34 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:38.109 10:57:34 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:38.370 10:57:34 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.370 [2024-05-15 10:57:34.913620] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype 
to be removed in v24.09 00:11:38.370 [2024-05-15 10:57:34.913842] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.370 10:57:34 -- target/ns_masking.sh@61 -- # connect 00:11:38.370 10:57:34 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 739e0402-a2e3-40f9-a85d-fe1f8292a59d -a 10.0.0.2 -s 4420 -i 4 00:11:38.631 10:57:35 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.631 10:57:35 -- common/autotest_common.sh@1194 -- # local i=0 00:11:38.631 10:57:35 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.631 10:57:35 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:38.631 10:57:35 -- common/autotest_common.sh@1201 -- # sleep 2 00:11:40.543 10:57:37 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:40.543 10:57:37 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:40.543 10:57:37 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.543 10:57:37 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:40.543 10:57:37 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.543 10:57:37 -- common/autotest_common.sh@1204 -- # return 0 00:11:40.543 10:57:37 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:40.543 10:57:37 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:40.543 10:57:37 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:40.543 10:57:37 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:40.543 10:57:37 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:40.543 10:57:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.543 10:57:37 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.543 [ 0]:0x1 00:11:40.804 10:57:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.804 10:57:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.804 10:57:37 -- target/ns_masking.sh@40 -- # nguid=e232b78c95d4417497cc84b0e1fec436 00:11:40.804 10:57:37 -- target/ns_masking.sh@41 -- # [[ e232b78c95d4417497cc84b0e1fec436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.804 10:57:37 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:40.804 10:57:37 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:40.804 10:57:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.804 10:57:37 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.804 [ 0]:0x1 00:11:40.804 10:57:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.804 10:57:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.804 10:57:37 -- target/ns_masking.sh@40 -- # nguid=e232b78c95d4417497cc84b0e1fec436 00:11:40.804 10:57:37 -- target/ns_masking.sh@41 -- # [[ e232b78c95d4417497cc84b0e1fec436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.804 10:57:37 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:41.065 10:57:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:41.065 10:57:37 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:41.065 [ 1]:0x2 00:11:41.065 10:57:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.065 10:57:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 
00:11:41.065 10:57:37 -- target/ns_masking.sh@40 -- # nguid=dd5078d9b9644b06b82c66365c44ace9 00:11:41.065 10:57:37 -- target/ns_masking.sh@41 -- # [[ dd5078d9b9644b06b82c66365c44ace9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.065 10:57:37 -- target/ns_masking.sh@69 -- # disconnect 00:11:41.065 10:57:37 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.065 10:57:37 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.326 10:57:37 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:41.586 10:57:37 -- target/ns_masking.sh@77 -- # connect 1 00:11:41.586 10:57:37 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 739e0402-a2e3-40f9-a85d-fe1f8292a59d -a 10.0.0.2 -s 4420 -i 4 00:11:41.586 10:57:38 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:41.586 10:57:38 -- common/autotest_common.sh@1194 -- # local i=0 00:11:41.586 10:57:38 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.586 10:57:38 -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:11:41.586 10:57:38 -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:11:41.586 10:57:38 -- common/autotest_common.sh@1201 -- # sleep 2 00:11:44.134 10:57:40 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:44.134 10:57:40 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:44.134 10:57:40 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.134 10:57:40 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:44.134 10:57:40 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.134 10:57:40 -- common/autotest_common.sh@1204 -- # return 0 00:11:44.134 10:57:40 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:44.134 10:57:40 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:44.134 10:57:40 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:44.134 10:57:40 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:44.134 10:57:40 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:44.134 10:57:40 -- common/autotest_common.sh@648 -- # local es=0 00:11:44.134 10:57:40 -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:44.134 10:57:40 -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:44.134 10:57:40 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.134 10:57:40 -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:44.134 10:57:40 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.134 10:57:40 -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:44.134 10:57:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.134 10:57:40 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:44.134 10:57:40 -- 
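With NSID 1 added --no-auto-visible, the host connect and the visibility probes that follow reduce to the commands below. This is a sketch; the NQNs, host ID and controller name are taken from the trace, and the last two commands are what the ns_is_visible helper in ns_masking.sh wraps.

# Connect to the subsystem as host1, then probe which namespaces this host can see
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 \
    -I 739e0402-a2e3-40f9-a85d-fe1f8292a59d -i 4
nvme list-ns /dev/nvme0 | grep 0x1                   # a masked namespace does not appear in the active list
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid  # and it reports an all-zero NGUID when identified directly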
target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.134 10:57:40 -- common/autotest_common.sh@651 -- # es=1 00:11:44.134 10:57:40 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:44.134 10:57:40 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:44.134 10:57:40 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:44.134 10:57:40 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:44.134 10:57:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.134 10:57:40 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:44.134 [ 0]:0x2 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # nguid=dd5078d9b9644b06b82c66365c44ace9 00:11:44.134 10:57:40 -- target/ns_masking.sh@41 -- # [[ dd5078d9b9644b06b82c66365c44ace9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.134 10:57:40 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:44.134 10:57:40 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:44.134 10:57:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.134 10:57:40 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:44.134 [ 0]:0x1 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # nguid=e232b78c95d4417497cc84b0e1fec436 00:11:44.134 10:57:40 -- target/ns_masking.sh@41 -- # [[ e232b78c95d4417497cc84b0e1fec436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.134 10:57:40 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:44.134 10:57:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.134 10:57:40 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:44.134 [ 1]:0x2 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.134 10:57:40 -- target/ns_masking.sh@40 -- # nguid=dd5078d9b9644b06b82c66365c44ace9 00:11:44.134 10:57:40 -- target/ns_masking.sh@41 -- # [[ dd5078d9b9644b06b82c66365c44ace9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.134 10:57:40 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:44.395 10:57:40 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:44.395 10:57:40 -- common/autotest_common.sh@648 -- # local es=0 00:11:44.395 10:57:40 -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:44.395 10:57:40 -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:44.395 10:57:40 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.395 10:57:40 -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:44.396 10:57:40 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.396 10:57:40 -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:44.396 10:57:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.396 10:57:40 -- target/ns_masking.sh@39 -- # grep 0x1 
00:11:44.396 10:57:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:44.396 10:57:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.396 10:57:40 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:44.396 10:57:40 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.396 10:57:40 -- common/autotest_common.sh@651 -- # es=1 00:11:44.396 10:57:40 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:44.396 10:57:40 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:44.396 10:57:40 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:44.396 10:57:40 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:44.396 10:57:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.396 10:57:40 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:44.396 [ 0]:0x2 00:11:44.396 10:57:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:44.396 10:57:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.396 10:57:41 -- target/ns_masking.sh@40 -- # nguid=dd5078d9b9644b06b82c66365c44ace9 00:11:44.396 10:57:41 -- target/ns_masking.sh@41 -- # [[ dd5078d9b9644b06b82c66365c44ace9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.396 10:57:41 -- target/ns_masking.sh@91 -- # disconnect 00:11:44.396 10:57:41 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.656 10:57:41 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:44.656 10:57:41 -- target/ns_masking.sh@95 -- # connect 2 00:11:44.656 10:57:41 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 739e0402-a2e3-40f9-a85d-fe1f8292a59d -a 10.0.0.2 -s 4420 -i 4 00:11:44.915 10:57:41 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:44.915 10:57:41 -- common/autotest_common.sh@1194 -- # local i=0 00:11:44.915 10:57:41 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.915 10:57:41 -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:44.915 10:57:41 -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:44.915 10:57:41 -- common/autotest_common.sh@1201 -- # sleep 2 00:11:46.830 10:57:43 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:46.830 10:57:43 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:46.830 10:57:43 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.830 10:57:43 -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:46.830 10:57:43 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.830 10:57:43 -- common/autotest_common.sh@1204 -- # return 0 00:11:46.830 10:57:43 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:46.830 10:57:43 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:46.830 10:57:43 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:46.830 10:57:43 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:46.830 10:57:43 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:46.830 10:57:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.830 
10:57:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.830 [ 0]:0x1 00:11:47.090 10:57:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.090 10:57:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.090 10:57:43 -- target/ns_masking.sh@40 -- # nguid=e232b78c95d4417497cc84b0e1fec436 00:11:47.090 10:57:43 -- target/ns_masking.sh@41 -- # [[ e232b78c95d4417497cc84b0e1fec436 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.090 10:57:43 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:47.090 10:57:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.090 10:57:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:47.090 [ 1]:0x2 00:11:47.090 10:57:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.090 10:57:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.090 10:57:43 -- target/ns_masking.sh@40 -- # nguid=dd5078d9b9644b06b82c66365c44ace9 00:11:47.090 10:57:43 -- target/ns_masking.sh@41 -- # [[ dd5078d9b9644b06b82c66365c44ace9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.090 10:57:43 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:47.352 10:57:43 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:47.352 10:57:43 -- common/autotest_common.sh@648 -- # local es=0 00:11:47.352 10:57:43 -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:47.352 10:57:43 -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:47.352 10:57:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.352 10:57:43 -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:47.352 10:57:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.352 10:57:43 -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:47.352 10:57:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.352 10:57:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.352 10:57:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.352 10:57:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.352 10:57:43 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:47.352 10:57:43 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.352 10:57:43 -- common/autotest_common.sh@651 -- # es=1 00:11:47.352 10:57:43 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.352 10:57:43 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.352 10:57:43 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.352 10:57:43 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:47.352 10:57:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.352 10:57:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:47.352 [ 0]:0x2 00:11:47.352 10:57:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.352 10:57:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.352 10:57:43 -- target/ns_masking.sh@40 -- # nguid=dd5078d9b9644b06b82c66365c44ace9 00:11:47.352 10:57:43 -- target/ns_masking.sh@41 -- # [[ dd5078d9b9644b06b82c66365c44ace9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.352 10:57:43 -- target/ns_masking.sh@105 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:47.352 10:57:43 -- common/autotest_common.sh@648 -- # local es=0 00:11:47.352 10:57:43 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:47.352 10:57:43 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.352 10:57:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.352 10:57:43 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.352 10:57:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.352 10:57:43 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.352 10:57:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.352 10:57:43 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.352 10:57:43 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:47.352 10:57:43 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:47.612 [2024-05-15 10:57:44.014103] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:47.612 request: 00:11:47.612 { 00:11:47.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.612 "nsid": 2, 00:11:47.612 "host": "nqn.2016-06.io.spdk:host1", 00:11:47.612 "method": "nvmf_ns_remove_host", 00:11:47.612 "req_id": 1 00:11:47.612 } 00:11:47.612 Got JSON-RPC error response 00:11:47.612 response: 00:11:47.612 { 00:11:47.612 "code": -32602, 00:11:47.612 "message": "Invalid parameters" 00:11:47.612 } 00:11:47.612 10:57:44 -- common/autotest_common.sh@651 -- # es=1 00:11:47.612 10:57:44 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.612 10:57:44 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.612 10:57:44 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.612 10:57:44 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:47.612 10:57:44 -- common/autotest_common.sh@648 -- # local es=0 00:11:47.612 10:57:44 -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:47.612 10:57:44 -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:47.612 10:57:44 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.613 10:57:44 -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:47.613 10:57:44 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.613 10:57:44 -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:47.613 10:57:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.613 10:57:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.613 10:57:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.613 10:57:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.613 10:57:44 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:47.613 10:57:44 -- target/ns_masking.sh@41 -- # [[ 
00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.613 10:57:44 -- common/autotest_common.sh@651 -- # es=1 00:11:47.613 10:57:44 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.613 10:57:44 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.613 10:57:44 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.613 10:57:44 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:47.613 10:57:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.613 10:57:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:47.613 [ 0]:0x2 00:11:47.613 10:57:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.613 10:57:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.613 10:57:44 -- target/ns_masking.sh@40 -- # nguid=dd5078d9b9644b06b82c66365c44ace9 00:11:47.613 10:57:44 -- target/ns_masking.sh@41 -- # [[ dd5078d9b9644b06b82c66365c44ace9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.613 10:57:44 -- target/ns_masking.sh@108 -- # disconnect 00:11:47.613 10:57:44 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.873 10:57:44 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.873 10:57:44 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:47.873 10:57:44 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:47.873 10:57:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:47.873 10:57:44 -- nvmf/common.sh@117 -- # sync 00:11:47.873 10:57:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.873 10:57:44 -- nvmf/common.sh@120 -- # set +e 00:11:47.873 10:57:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.873 10:57:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.873 rmmod nvme_tcp 00:11:48.134 rmmod nvme_fabrics 00:11:48.134 rmmod nvme_keyring 00:11:48.134 10:57:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.134 10:57:44 -- nvmf/common.sh@124 -- # set -e 00:11:48.134 10:57:44 -- nvmf/common.sh@125 -- # return 0 00:11:48.134 10:57:44 -- nvmf/common.sh@478 -- # '[' -n 245475 ']' 00:11:48.134 10:57:44 -- nvmf/common.sh@479 -- # killprocess 245475 00:11:48.134 10:57:44 -- common/autotest_common.sh@946 -- # '[' -z 245475 ']' 00:11:48.134 10:57:44 -- common/autotest_common.sh@950 -- # kill -0 245475 00:11:48.134 10:57:44 -- common/autotest_common.sh@951 -- # uname 00:11:48.134 10:57:44 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:48.134 10:57:44 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 245475 00:11:48.134 10:57:44 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:48.134 10:57:44 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:48.134 10:57:44 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 245475' 00:11:48.134 killing process with pid 245475 00:11:48.134 10:57:44 -- common/autotest_common.sh@965 -- # kill 245475 00:11:48.134 [2024-05-15 10:57:44.634435] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:48.134 10:57:44 -- common/autotest_common.sh@970 -- # wait 245475 00:11:48.395 10:57:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:48.395 
10:57:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:48.395 10:57:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:48.395 10:57:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.395 10:57:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.395 10:57:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.395 10:57:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.395 10:57:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.309 10:57:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.309 00:11:50.309 real 0m21.001s 00:11:50.309 user 0m50.604s 00:11:50.309 sys 0m6.733s 00:11:50.309 10:57:46 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:50.309 10:57:46 -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 ************************************ 00:11:50.309 END TEST nvmf_ns_masking 00:11:50.309 ************************************ 00:11:50.309 10:57:46 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:50.309 10:57:46 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:50.309 10:57:46 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:50.310 10:57:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.310 10:57:46 -- common/autotest_common.sh@10 -- # set +x 00:11:50.310 ************************************ 00:11:50.310 START TEST nvmf_nvme_cli 00:11:50.310 ************************************ 00:11:50.310 10:57:46 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:50.571 * Looking for test storage... 00:11:50.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.571 10:57:47 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.571 10:57:47 -- nvmf/common.sh@7 -- # uname -s 00:11:50.571 10:57:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.571 10:57:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.571 10:57:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.571 10:57:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.571 10:57:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.571 10:57:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.571 10:57:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.571 10:57:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.571 10:57:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.571 10:57:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.572 10:57:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:50.572 10:57:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:50.572 10:57:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.572 10:57:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.572 10:57:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.572 10:57:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.572 10:57:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.572 10:57:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.572 10:57:47 -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.572 10:57:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.572 10:57:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.572 10:57:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.572 10:57:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.572 10:57:47 -- paths/export.sh@5 -- # export PATH 00:11:50.572 10:57:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.572 10:57:47 -- nvmf/common.sh@47 -- # : 0 00:11:50.572 10:57:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.572 10:57:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.572 10:57:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.572 10:57:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.572 10:57:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.572 10:57:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.572 10:57:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.572 10:57:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.572 10:57:47 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.572 10:57:47 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.572 10:57:47 -- target/nvme_cli.sh@14 -- # devs=() 00:11:50.572 10:57:47 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:50.572 10:57:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:50.572 10:57:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:11:50.572 10:57:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:50.572 10:57:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:50.572 10:57:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:50.572 10:57:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.572 10:57:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.572 10:57:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.572 10:57:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:50.572 10:57:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:50.572 10:57:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.572 10:57:47 -- common/autotest_common.sh@10 -- # set +x 00:11:57.162 10:57:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:57.162 10:57:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.162 10:57:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.162 10:57:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.162 10:57:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:57.162 10:57:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.162 10:57:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.162 10:57:53 -- nvmf/common.sh@295 -- # net_devs=() 00:11:57.162 10:57:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.162 10:57:53 -- nvmf/common.sh@296 -- # e810=() 00:11:57.162 10:57:53 -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.162 10:57:53 -- nvmf/common.sh@297 -- # x722=() 00:11:57.162 10:57:53 -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.162 10:57:53 -- nvmf/common.sh@298 -- # mlx=() 00:11:57.162 10:57:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.162 10:57:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.162 10:57:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.162 10:57:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:57.162 10:57:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.162 10:57:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.162 10:57:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:57.162 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:57.162 10:57:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.162 10:57:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:57.162 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:57.162 10:57:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.162 10:57:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.163 10:57:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.163 10:57:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.163 10:57:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.163 10:57:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:57.163 10:57:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:57.163 10:57:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.163 10:57:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.163 10:57:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:57.163 10:57:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.163 10:57:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:57.163 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:57.163 10:57:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.163 10:57:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.163 10:57:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.163 10:57:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:57.163 10:57:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.163 10:57:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:57.163 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:57.163 10:57:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.163 10:57:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:57.163 10:57:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:57.163 10:57:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:57.163 10:57:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:57.163 10:57:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:57.163 10:57:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.163 10:57:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.163 10:57:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.163 10:57:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:57.163 10:57:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.163 10:57:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.163 10:57:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:57.163 10:57:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.163 10:57:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.163 10:57:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:57.163 10:57:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:57.163 10:57:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.163 10:57:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.424 10:57:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
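The trace above is gather_supported_nvmf_pci_devs resolving the two Intel E810 ports (device ID 0x159b) to their kernel net device names (cvl_0_0, cvl_0_1) through sysfs before the TCP loopback setup begins. A rough standalone illustration of that lookup, not the script's exact logic, assuming lspci is available:

# Hedged sketch: map Intel E810 (8086:159b) PCI functions to net devices via
# the same /sys/bus/pci/devices/<bdf>/net/ directories the script expands.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
    done
done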
00:11:57.424 10:57:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.424 10:57:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:57.424 10:57:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.424 10:57:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.424 10:57:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.424 10:57:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:57.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:11:57.424 00:11:57.424 --- 10.0.0.2 ping statistics --- 00:11:57.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.424 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:11:57.424 10:57:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:11:57.424 00:11:57.424 --- 10.0.0.1 ping statistics --- 00:11:57.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.424 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:57.424 10:57:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.424 10:57:54 -- nvmf/common.sh@411 -- # return 0 00:11:57.424 10:57:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:57.424 10:57:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.424 10:57:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:57.424 10:57:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:57.424 10:57:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.424 10:57:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:57.424 10:57:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:57.685 10:57:54 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:57.685 10:57:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:57.685 10:57:54 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:57.685 10:57:54 -- common/autotest_common.sh@10 -- # set +x 00:11:57.685 10:57:54 -- nvmf/common.sh@470 -- # nvmfpid=252137 00:11:57.685 10:57:54 -- nvmf/common.sh@471 -- # waitforlisten 252137 00:11:57.685 10:57:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.685 10:57:54 -- common/autotest_common.sh@827 -- # '[' -z 252137 ']' 00:11:57.685 10:57:54 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.685 10:57:54 -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:57.685 10:57:54 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.685 10:57:54 -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:57.685 10:57:54 -- common/autotest_common.sh@10 -- # set +x 00:11:57.685 [2024-05-15 10:57:54.139042] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
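The lines above complete nvmf_tcp_init: the target port is moved into the cvl_0_0_ns_spdk network namespace, both ends get 10.0.0.x addresses, TCP port 4420 is opened, reachability is verified with ping in both directions, and nvme-tcp is loaded before nvmf_tgt is started. A condensed sketch of that plumbing, matching the commands traced above but hedged as an outline rather than the full nvmftestinit logic:

# Hedged sketch of the loopback topology built above (interface names as in this log).
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # host (initiator) -> namespace (target)
ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace (target) -> host (initiator)
modprobe nvme-tcp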
00:11:57.685 [2024-05-15 10:57:54.139103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.685 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.685 [2024-05-15 10:57:54.210770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.685 [2024-05-15 10:57:54.285856] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.685 [2024-05-15 10:57:54.285897] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.685 [2024-05-15 10:57:54.285905] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.685 [2024-05-15 10:57:54.285912] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.685 [2024-05-15 10:57:54.285918] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.685 [2024-05-15 10:57:54.286052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.685 [2024-05-15 10:57:54.286166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.685 [2024-05-15 10:57:54.286293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.685 [2024-05-15 10:57:54.286295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.627 10:57:54 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:58.627 10:57:54 -- common/autotest_common.sh@860 -- # return 0 00:11:58.627 10:57:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:58.627 10:57:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.627 10:57:54 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 10:57:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.628 10:57:54 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.628 10:57:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.628 10:57:54 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 [2024-05-15 10:57:54.972170] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.628 10:57:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.628 10:57:54 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.628 10:57:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.628 10:57:54 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 Malloc0 00:11:58.628 10:57:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.628 10:57:55 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:58.628 10:57:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.628 10:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 Malloc1 00:11:58.628 10:57:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.628 10:57:55 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:58.628 10:57:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.628 10:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 10:57:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.628 10:57:55 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.628 10:57:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.628 10:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 10:57:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.628 10:57:55 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.628 10:57:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.628 10:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 10:57:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.628 10:57:55 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.628 10:57:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.628 10:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 [2024-05-15 10:57:55.062018] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:58.628 [2024-05-15 10:57:55.062255] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.628 10:57:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.628 10:57:55 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.628 10:57:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.628 10:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:58.628 10:57:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.628 10:57:55 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:58.628 00:11:58.628 Discovery Log Number of Records 2, Generation counter 2 00:11:58.628 =====Discovery Log Entry 0====== 00:11:58.628 trtype: tcp 00:11:58.628 adrfam: ipv4 00:11:58.628 subtype: current discovery subsystem 00:11:58.628 treq: not required 00:11:58.628 portid: 0 00:11:58.628 trsvcid: 4420 00:11:58.628 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.628 traddr: 10.0.0.2 00:11:58.628 eflags: explicit discovery connections, duplicate discovery information 00:11:58.628 sectype: none 00:11:58.628 =====Discovery Log Entry 1====== 00:11:58.628 trtype: tcp 00:11:58.628 adrfam: ipv4 00:11:58.628 subtype: nvme subsystem 00:11:58.628 treq: not required 00:11:58.628 portid: 0 00:11:58.628 trsvcid: 4420 00:11:58.628 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:58.628 traddr: 10.0.0.2 00:11:58.628 eflags: none 00:11:58.628 sectype: none 00:11:58.628 10:57:55 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:58.628 10:57:55 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:58.628 10:57:55 -- nvmf/common.sh@511 -- # local dev _ 00:11:58.628 10:57:55 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:58.628 10:57:55 -- nvmf/common.sh@510 -- # nvme list 00:11:58.628 10:57:55 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:58.628 10:57:55 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:58.628 10:57:55 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:58.628 10:57:55 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:58.628 10:57:55 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:58.628 10:57:55 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.541 10:57:56 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:00.541 10:57:56 -- common/autotest_common.sh@1194 -- # local i=0 00:12:00.541 10:57:56 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.541 10:57:56 -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:12:00.541 10:57:56 -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:12:00.541 10:57:56 -- common/autotest_common.sh@1201 -- # sleep 2 00:12:02.455 10:57:58 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:02.455 10:57:58 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:02.455 10:57:58 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.455 10:57:58 -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:12:02.455 10:57:58 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.455 10:57:58 -- common/autotest_common.sh@1204 -- # return 0 00:12:02.455 10:57:58 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:02.455 10:57:58 -- nvmf/common.sh@511 -- # local dev _ 00:12:02.455 10:57:58 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.455 10:57:58 -- nvmf/common.sh@510 -- # nvme list 00:12:02.455 10:57:58 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:02.455 10:57:58 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.455 10:57:58 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:02.455 10:57:58 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.455 10:57:58 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:02.455 10:57:58 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:02.455 10:57:58 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.455 10:57:58 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:02.455 10:57:58 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:02.455 10:57:58 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.455 10:57:58 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:02.455 /dev/nvme0n1 ]] 00:12:02.455 10:57:58 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:02.455 10:57:58 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:02.455 10:57:58 -- nvmf/common.sh@511 -- # local dev _ 00:12:02.455 10:57:58 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.455 10:57:58 -- nvmf/common.sh@510 -- # nvme list 00:12:02.455 10:57:59 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:02.456 10:57:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.456 10:57:59 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:02.456 10:57:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.456 10:57:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:02.456 10:57:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:02.456 10:57:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.456 10:57:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:02.456 10:57:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:02.456 10:57:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.456 10:57:59 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:02.456 10:57:59 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.716 10:57:59 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.716 10:57:59 -- 
common/autotest_common.sh@1215 -- # local i=0 00:12:02.716 10:57:59 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:02.716 10:57:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.716 10:57:59 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:02.716 10:57:59 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.716 10:57:59 -- common/autotest_common.sh@1227 -- # return 0 00:12:02.716 10:57:59 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:02.716 10:57:59 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.716 10:57:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.716 10:57:59 -- common/autotest_common.sh@10 -- # set +x 00:12:02.716 10:57:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.716 10:57:59 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:02.716 10:57:59 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:02.716 10:57:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:02.716 10:57:59 -- nvmf/common.sh@117 -- # sync 00:12:02.716 10:57:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.716 10:57:59 -- nvmf/common.sh@120 -- # set +e 00:12:02.716 10:57:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.716 10:57:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.977 rmmod nvme_tcp 00:12:02.977 rmmod nvme_fabrics 00:12:02.977 rmmod nvme_keyring 00:12:02.977 10:57:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.977 10:57:59 -- nvmf/common.sh@124 -- # set -e 00:12:02.977 10:57:59 -- nvmf/common.sh@125 -- # return 0 00:12:02.977 10:57:59 -- nvmf/common.sh@478 -- # '[' -n 252137 ']' 00:12:02.977 10:57:59 -- nvmf/common.sh@479 -- # killprocess 252137 00:12:02.977 10:57:59 -- common/autotest_common.sh@946 -- # '[' -z 252137 ']' 00:12:02.977 10:57:59 -- common/autotest_common.sh@950 -- # kill -0 252137 00:12:02.977 10:57:59 -- common/autotest_common.sh@951 -- # uname 00:12:02.978 10:57:59 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:02.978 10:57:59 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 252137 00:12:02.978 10:57:59 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:02.978 10:57:59 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:02.978 10:57:59 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 252137' 00:12:02.978 killing process with pid 252137 00:12:02.978 10:57:59 -- common/autotest_common.sh@965 -- # kill 252137 00:12:02.978 [2024-05-15 10:57:59.482997] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:02.978 10:57:59 -- common/autotest_common.sh@970 -- # wait 252137 00:12:03.238 10:57:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:03.238 10:57:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:03.238 10:57:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:03.238 10:57:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.238 10:57:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:03.238 10:57:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.238 10:57:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.238 10:57:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.150 10:58:01 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:05.150 00:12:05.150 real 0m14.759s 00:12:05.150 user 0m23.424s 00:12:05.150 sys 0m5.738s 00:12:05.150 10:58:01 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:05.150 10:58:01 -- common/autotest_common.sh@10 -- # set +x 00:12:05.150 ************************************ 00:12:05.150 END TEST nvmf_nvme_cli 00:12:05.150 ************************************ 00:12:05.150 10:58:01 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:05.150 10:58:01 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:05.150 10:58:01 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:05.150 10:58:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:05.150 10:58:01 -- common/autotest_common.sh@10 -- # set +x 00:12:05.150 ************************************ 00:12:05.150 START TEST nvmf_vfio_user 00:12:05.150 ************************************ 00:12:05.150 10:58:01 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:05.410 * Looking for test storage... 00:12:05.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.410 10:58:01 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.410 10:58:01 -- nvmf/common.sh@7 -- # uname -s 00:12:05.410 10:58:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.410 10:58:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.410 10:58:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.410 10:58:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.410 10:58:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.410 10:58:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.410 10:58:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.410 10:58:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.410 10:58:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.410 10:58:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.411 10:58:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:05.411 10:58:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:05.411 10:58:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.411 10:58:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.411 10:58:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.411 10:58:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.411 10:58:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.411 10:58:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.411 10:58:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.411 10:58:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.411 10:58:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.411 10:58:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.411 10:58:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.411 10:58:01 -- paths/export.sh@5 -- # export PATH 00:12:05.411 10:58:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.411 10:58:01 -- nvmf/common.sh@47 -- # : 0 00:12:05.411 10:58:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.411 10:58:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.411 10:58:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.411 10:58:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.411 10:58:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.411 10:58:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.411 10:58:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.411 10:58:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:05.411 
10:58:01 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=253762 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 253762' 00:12:05.411 Process pid: 253762 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 253762 00:12:05.411 10:58:01 -- common/autotest_common.sh@827 -- # '[' -z 253762 ']' 00:12:05.411 10:58:01 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:05.411 10:58:01 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.411 10:58:01 -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:05.411 10:58:01 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.411 10:58:01 -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:05.411 10:58:01 -- common/autotest_common.sh@10 -- # set +x 00:12:05.411 [2024-05-15 10:58:01.986171] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:12:05.411 [2024-05-15 10:58:01.986236] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.411 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.411 [2024-05-15 10:58:02.051164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.671 [2024-05-15 10:58:02.125028] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.671 [2024-05-15 10:58:02.125064] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.671 [2024-05-15 10:58:02.125072] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.671 [2024-05-15 10:58:02.125078] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.671 [2024-05-15 10:58:02.125084] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
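The block above is nvmfappstart launching nvmf_tgt pinned to cores 0-3 (-m '[0,1,2,3]') for the vfio-user test, with waitforlisten polling until the JSON-RPC socket answers before the test proceeds. A rough equivalent of that start-and-wait step, assuming an SPDK build tree and the default /var/tmp/spdk.sock control socket:

# Hedged sketch of the start-and-wait done by nvmfappstart/waitforlisten above.
# Paths follow this workspace's layout; adjust for a local checkout.
spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
tgt_pid=$!
until spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $tgt_pid) is ready for RPCs"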
00:12:05.671 [2024-05-15 10:58:02.125219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.671 [2024-05-15 10:58:02.125338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.671 [2024-05-15 10:58:02.125494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.671 [2024-05-15 10:58:02.125495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.243 10:58:02 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:06.243 10:58:02 -- common/autotest_common.sh@860 -- # return 0 00:12:06.243 10:58:02 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:07.185 10:58:03 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:07.444 10:58:03 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:07.444 10:58:03 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:07.444 10:58:03 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.444 10:58:03 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:07.444 10:58:03 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:07.703 Malloc1 00:12:07.704 10:58:04 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:07.704 10:58:04 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:07.963 10:58:04 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:07.963 [2024-05-15 10:58:04.613524] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:08.223 10:58:04 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:08.223 10:58:04 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:08.223 10:58:04 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:08.223 Malloc2 00:12:08.223 10:58:04 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:08.484 10:58:04 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:08.744 10:58:05 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:08.744 10:58:05 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:08.744 10:58:05 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:08.744 10:58:05 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:08.744 10:58:05 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:08.744 10:58:05 -- target/nvmf_vfio_user.sh@82 -- # 
test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:08.744 10:58:05 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:08.744 [2024-05-15 10:58:05.350236] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:12:08.744 [2024-05-15 10:58:05.350277] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254458 ] 00:12:08.744 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.745 [2024-05-15 10:58:05.381158] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:08.745 [2024-05-15 10:58:05.389902] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:08.745 [2024-05-15 10:58:05.389921] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7a1534c000 00:12:08.745 [2024-05-15 10:58:05.390901] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.745 [2024-05-15 10:58:05.391905] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.745 [2024-05-15 10:58:05.392912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.745 [2024-05-15 10:58:05.393910] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.745 [2024-05-15 10:58:05.394930] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.745 [2024-05-15 10:58:05.395932] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.745 [2024-05-15 10:58:05.396943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.745 [2024-05-15 10:58:05.397943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:09.009 [2024-05-15 10:58:05.398959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:09.009 [2024-05-15 10:58:05.398972] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7a15341000 00:12:09.009 [2024-05-15 10:58:05.400300] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:09.009 [2024-05-15 10:58:05.420712] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:09.009 [2024-05-15 10:58:05.420736] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:09.009 [2024-05-15 10:58:05.423101] nvme_vfio_user.c: 
103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:09.009 [2024-05-15 10:58:05.423148] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:09.009 [2024-05-15 10:58:05.423229] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:09.009 [2024-05-15 10:58:05.423246] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:09.009 [2024-05-15 10:58:05.423252] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:09.009 [2024-05-15 10:58:05.424095] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:09.009 [2024-05-15 10:58:05.424104] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:09.009 [2024-05-15 10:58:05.424111] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:09.009 [2024-05-15 10:58:05.425107] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:09.009 [2024-05-15 10:58:05.425115] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:09.009 [2024-05-15 10:58:05.425122] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:09.009 [2024-05-15 10:58:05.426113] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:09.009 [2024-05-15 10:58:05.426121] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:09.009 [2024-05-15 10:58:05.427118] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:09.009 [2024-05-15 10:58:05.427126] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:09.009 [2024-05-15 10:58:05.427131] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:09.009 [2024-05-15 10:58:05.427137] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:09.009 [2024-05-15 10:58:05.427242] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:09.009 [2024-05-15 10:58:05.427247] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:09.009 [2024-05-15 10:58:05.427252] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:09.009 
[2024-05-15 10:58:05.428125] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:09.009 [2024-05-15 10:58:05.429125] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:09.009 [2024-05-15 10:58:05.430134] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:09.009 [2024-05-15 10:58:05.431131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:09.009 [2024-05-15 10:58:05.431181] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:09.009 [2024-05-15 10:58:05.432141] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:09.009 [2024-05-15 10:58:05.432149] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:09.009 [2024-05-15 10:58:05.432154] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:09.009 [2024-05-15 10:58:05.432175] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:09.009 [2024-05-15 10:58:05.432182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:09.009 [2024-05-15 10:58:05.432195] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:09.009 [2024-05-15 10:58:05.432199] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:09.009 [2024-05-15 10:58:05.432212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:09.009 [2024-05-15 10:58:05.432245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:09.009 [2024-05-15 10:58:05.432253] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:09.010 [2024-05-15 10:58:05.432259] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:09.010 [2024-05-15 10:58:05.432263] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:09.010 [2024-05-15 10:58:05.432268] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:09.010 [2024-05-15 10:58:05.432273] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:09.010 [2024-05-15 10:58:05.432278] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:09.010 [2024-05-15 10:58:05.432282] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to 
configure AER (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432289] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.010 [2024-05-15 10:58:05.432328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.010 [2024-05-15 10:58:05.432336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.010 [2024-05-15 10:58:05.432344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.010 [2024-05-15 10:58:05.432351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432388] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:09.010 [2024-05-15 10:58:05.432393] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432401] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432407] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432473] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432481] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:09.010 [2024-05-15 
10:58:05.432489] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:09.010 [2024-05-15 10:58:05.432493] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:09.010 [2024-05-15 10:58:05.432499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432524] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:09.010 [2024-05-15 10:58:05.432533] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432541] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432552] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:09.010 [2024-05-15 10:58:05.432557] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:09.010 [2024-05-15 10:58:05.432563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432595] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432604] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:09.010 [2024-05-15 10:58:05.432608] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:09.010 [2024-05-15 10:58:05.432614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432631] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432638] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432645] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432650] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config 
(timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432655] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432660] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:09.010 [2024-05-15 10:58:05.432664] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:09.010 [2024-05-15 10:58:05.432669] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:09.010 [2024-05-15 10:58:05.432688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432768] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:09.010 [2024-05-15 10:58:05.432772] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:09.010 [2024-05-15 10:58:05.432776] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:09.010 [2024-05-15 10:58:05.432779] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:09.010 [2024-05-15 10:58:05.432786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:09.010 [2024-05-15 10:58:05.432793] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:09.010 [2024-05-15 10:58:05.432799] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:09.010 [2024-05-15 10:58:05.432805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432812] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:09.010 [2024-05-15 10:58:05.432816] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:12:09.010 [2024-05-15 10:58:05.432822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432829] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:09.010 [2024-05-15 10:58:05.432833] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:09.010 [2024-05-15 10:58:05.432839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:09.010 [2024-05-15 10:58:05.432846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:09.010 [2024-05-15 10:58:05.432876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:09.010 ===================================================== 00:12:09.010 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:09.010 ===================================================== 00:12:09.010 Controller Capabilities/Features 00:12:09.010 ================================ 00:12:09.010 Vendor ID: 4e58 00:12:09.010 Subsystem Vendor ID: 4e58 00:12:09.010 Serial Number: SPDK1 00:12:09.010 Model Number: SPDK bdev Controller 00:12:09.010 Firmware Version: 24.05 00:12:09.010 Recommended Arb Burst: 6 00:12:09.010 IEEE OUI Identifier: 8d 6b 50 00:12:09.010 Multi-path I/O 00:12:09.011 May have multiple subsystem ports: Yes 00:12:09.011 May have multiple controllers: Yes 00:12:09.011 Associated with SR-IOV VF: No 00:12:09.011 Max Data Transfer Size: 131072 00:12:09.011 Max Number of Namespaces: 32 00:12:09.011 Max Number of I/O Queues: 127 00:12:09.011 NVMe Specification Version (VS): 1.3 00:12:09.011 NVMe Specification Version (Identify): 1.3 00:12:09.011 Maximum Queue Entries: 256 00:12:09.011 Contiguous Queues Required: Yes 00:12:09.011 Arbitration Mechanisms Supported 00:12:09.011 Weighted Round Robin: Not Supported 00:12:09.011 Vendor Specific: Not Supported 00:12:09.011 Reset Timeout: 15000 ms 00:12:09.011 Doorbell Stride: 4 bytes 00:12:09.011 NVM Subsystem Reset: Not Supported 00:12:09.011 Command Sets Supported 00:12:09.011 NVM Command Set: Supported 00:12:09.011 Boot Partition: Not Supported 00:12:09.011 Memory Page Size Minimum: 4096 bytes 00:12:09.011 Memory Page Size Maximum: 4096 bytes 00:12:09.011 Persistent Memory Region: Not Supported 00:12:09.011 Optional Asynchronous Events Supported 00:12:09.011 Namespace Attribute Notices: Supported 00:12:09.011 Firmware Activation Notices: Not Supported 00:12:09.011 ANA Change Notices: Not Supported 00:12:09.011 PLE Aggregate Log Change Notices: Not Supported 00:12:09.011 LBA Status Info Alert Notices: Not Supported 00:12:09.011 EGE Aggregate Log Change Notices: Not Supported 00:12:09.011 Normal NVM Subsystem Shutdown event: Not Supported 00:12:09.011 Zone Descriptor Change Notices: Not Supported 00:12:09.011 Discovery Log Change Notices: Not Supported 00:12:09.011 
Controller Attributes 00:12:09.011 128-bit Host Identifier: Supported 00:12:09.011 Non-Operational Permissive Mode: Not Supported 00:12:09.011 NVM Sets: Not Supported 00:12:09.011 Read Recovery Levels: Not Supported 00:12:09.011 Endurance Groups: Not Supported 00:12:09.011 Predictable Latency Mode: Not Supported 00:12:09.011 Traffic Based Keep ALive: Not Supported 00:12:09.011 Namespace Granularity: Not Supported 00:12:09.011 SQ Associations: Not Supported 00:12:09.011 UUID List: Not Supported 00:12:09.011 Multi-Domain Subsystem: Not Supported 00:12:09.011 Fixed Capacity Management: Not Supported 00:12:09.011 Variable Capacity Management: Not Supported 00:12:09.011 Delete Endurance Group: Not Supported 00:12:09.011 Delete NVM Set: Not Supported 00:12:09.011 Extended LBA Formats Supported: Not Supported 00:12:09.011 Flexible Data Placement Supported: Not Supported 00:12:09.011 00:12:09.011 Controller Memory Buffer Support 00:12:09.011 ================================ 00:12:09.011 Supported: No 00:12:09.011 00:12:09.011 Persistent Memory Region Support 00:12:09.011 ================================ 00:12:09.011 Supported: No 00:12:09.011 00:12:09.011 Admin Command Set Attributes 00:12:09.011 ============================ 00:12:09.011 Security Send/Receive: Not Supported 00:12:09.011 Format NVM: Not Supported 00:12:09.011 Firmware Activate/Download: Not Supported 00:12:09.011 Namespace Management: Not Supported 00:12:09.011 Device Self-Test: Not Supported 00:12:09.011 Directives: Not Supported 00:12:09.011 NVMe-MI: Not Supported 00:12:09.011 Virtualization Management: Not Supported 00:12:09.011 Doorbell Buffer Config: Not Supported 00:12:09.011 Get LBA Status Capability: Not Supported 00:12:09.011 Command & Feature Lockdown Capability: Not Supported 00:12:09.011 Abort Command Limit: 4 00:12:09.011 Async Event Request Limit: 4 00:12:09.011 Number of Firmware Slots: N/A 00:12:09.011 Firmware Slot 1 Read-Only: N/A 00:12:09.011 Firmware Activation Without Reset: N/A 00:12:09.011 Multiple Update Detection Support: N/A 00:12:09.011 Firmware Update Granularity: No Information Provided 00:12:09.011 Per-Namespace SMART Log: No 00:12:09.011 Asymmetric Namespace Access Log Page: Not Supported 00:12:09.011 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:09.011 Command Effects Log Page: Supported 00:12:09.011 Get Log Page Extended Data: Supported 00:12:09.011 Telemetry Log Pages: Not Supported 00:12:09.011 Persistent Event Log Pages: Not Supported 00:12:09.011 Supported Log Pages Log Page: May Support 00:12:09.011 Commands Supported & Effects Log Page: Not Supported 00:12:09.011 Feature Identifiers & Effects Log Page:May Support 00:12:09.011 NVMe-MI Commands & Effects Log Page: May Support 00:12:09.011 Data Area 4 for Telemetry Log: Not Supported 00:12:09.011 Error Log Page Entries Supported: 128 00:12:09.011 Keep Alive: Supported 00:12:09.011 Keep Alive Granularity: 10000 ms 00:12:09.011 00:12:09.011 NVM Command Set Attributes 00:12:09.011 ========================== 00:12:09.011 Submission Queue Entry Size 00:12:09.011 Max: 64 00:12:09.011 Min: 64 00:12:09.011 Completion Queue Entry Size 00:12:09.011 Max: 16 00:12:09.011 Min: 16 00:12:09.011 Number of Namespaces: 32 00:12:09.011 Compare Command: Supported 00:12:09.011 Write Uncorrectable Command: Not Supported 00:12:09.011 Dataset Management Command: Supported 00:12:09.011 Write Zeroes Command: Supported 00:12:09.011 Set Features Save Field: Not Supported 00:12:09.011 Reservations: Not Supported 00:12:09.011 Timestamp: Not Supported 00:12:09.011 Copy: 
Supported 00:12:09.011 Volatile Write Cache: Present 00:12:09.011 Atomic Write Unit (Normal): 1 00:12:09.011 Atomic Write Unit (PFail): 1 00:12:09.011 Atomic Compare & Write Unit: 1 00:12:09.011 Fused Compare & Write: Supported 00:12:09.011 Scatter-Gather List 00:12:09.011 SGL Command Set: Supported (Dword aligned) 00:12:09.011 SGL Keyed: Not Supported 00:12:09.011 SGL Bit Bucket Descriptor: Not Supported 00:12:09.011 SGL Metadata Pointer: Not Supported 00:12:09.011 Oversized SGL: Not Supported 00:12:09.011 SGL Metadata Address: Not Supported 00:12:09.011 SGL Offset: Not Supported 00:12:09.011 Transport SGL Data Block: Not Supported 00:12:09.011 Replay Protected Memory Block: Not Supported 00:12:09.011 00:12:09.011 Firmware Slot Information 00:12:09.011 ========================= 00:12:09.011 Active slot: 1 00:12:09.011 Slot 1 Firmware Revision: 24.05 00:12:09.011 00:12:09.011 00:12:09.011 Commands Supported and Effects 00:12:09.011 ============================== 00:12:09.011 Admin Commands 00:12:09.011 -------------- 00:12:09.011 Get Log Page (02h): Supported 00:12:09.011 Identify (06h): Supported 00:12:09.011 Abort (08h): Supported 00:12:09.011 Set Features (09h): Supported 00:12:09.011 Get Features (0Ah): Supported 00:12:09.011 Asynchronous Event Request (0Ch): Supported 00:12:09.011 Keep Alive (18h): Supported 00:12:09.011 I/O Commands 00:12:09.011 ------------ 00:12:09.011 Flush (00h): Supported LBA-Change 00:12:09.011 Write (01h): Supported LBA-Change 00:12:09.011 Read (02h): Supported 00:12:09.011 Compare (05h): Supported 00:12:09.011 Write Zeroes (08h): Supported LBA-Change 00:12:09.011 Dataset Management (09h): Supported LBA-Change 00:12:09.011 Copy (19h): Supported LBA-Change 00:12:09.011 Unknown (79h): Supported LBA-Change 00:12:09.011 Unknown (7Ah): Supported 00:12:09.011 00:12:09.011 Error Log 00:12:09.011 ========= 00:12:09.011 00:12:09.011 Arbitration 00:12:09.011 =========== 00:12:09.011 Arbitration Burst: 1 00:12:09.011 00:12:09.011 Power Management 00:12:09.011 ================ 00:12:09.011 Number of Power States: 1 00:12:09.011 Current Power State: Power State #0 00:12:09.011 Power State #0: 00:12:09.011 Max Power: 0.00 W 00:12:09.011 Non-Operational State: Operational 00:12:09.011 Entry Latency: Not Reported 00:12:09.011 Exit Latency: Not Reported 00:12:09.011 Relative Read Throughput: 0 00:12:09.011 Relative Read Latency: 0 00:12:09.011 Relative Write Throughput: 0 00:12:09.011 Relative Write Latency: 0 00:12:09.011 Idle Power: Not Reported 00:12:09.011 Active Power: Not Reported 00:12:09.011 Non-Operational Permissive Mode: Not Supported 00:12:09.011 00:12:09.011 Health Information 00:12:09.011 ================== 00:12:09.011 Critical Warnings: 00:12:09.011 Available Spare Space: OK 00:12:09.011 Temperature: OK 00:12:09.011 Device Reliability: OK 00:12:09.011 Read Only: No 00:12:09.011 Volatile Memory Backup: OK 00:12:09.011 Current Temperature: 0 Kelvin (-2[2024-05-15 10:58:05.432975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:09.011 [2024-05-15 10:58:05.432983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:09.011 [2024-05-15 10:58:05.433010] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:09.011 [2024-05-15 10:58:05.433019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.011 [2024-05-15 10:58:05.433025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.012 [2024-05-15 10:58:05.433032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.012 [2024-05-15 10:58:05.433038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.012 [2024-05-15 10:58:05.435553] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:09.012 [2024-05-15 10:58:05.435563] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:09.012 [2024-05-15 10:58:05.436155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:09.012 [2024-05-15 10:58:05.436194] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:09.012 [2024-05-15 10:58:05.436199] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:09.012 [2024-05-15 10:58:05.437167] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:09.012 [2024-05-15 10:58:05.437178] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:09.012 [2024-05-15 10:58:05.437236] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:09.012 [2024-05-15 10:58:05.439193] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:09.012 73 Celsius) 00:12:09.012 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:09.012 Available Spare: 0% 00:12:09.012 Available Spare Threshold: 0% 00:12:09.012 Life Percentage Used: 0% 00:12:09.012 Data Units Read: 0 00:12:09.012 Data Units Written: 0 00:12:09.012 Host Read Commands: 0 00:12:09.012 Host Write Commands: 0 00:12:09.012 Controller Busy Time: 0 minutes 00:12:09.012 Power Cycles: 0 00:12:09.012 Power On Hours: 0 hours 00:12:09.012 Unsafe Shutdowns: 0 00:12:09.012 Unrecoverable Media Errors: 0 00:12:09.012 Lifetime Error Log Entries: 0 00:12:09.012 Warning Temperature Time: 0 minutes 00:12:09.012 Critical Temperature Time: 0 minutes 00:12:09.012 00:12:09.012 Number of Queues 00:12:09.012 ================ 00:12:09.012 Number of I/O Submission Queues: 127 00:12:09.012 Number of I/O Completion Queues: 127 00:12:09.012 00:12:09.012 Active Namespaces 00:12:09.012 ================= 00:12:09.012 Namespace ID:1 00:12:09.012 Error Recovery Timeout: Unlimited 00:12:09.012 Command Set Identifier: NVM (00h) 00:12:09.012 Deallocate: Supported 00:12:09.012 Deallocated/Unwritten Error: Not Supported 00:12:09.012 Deallocated Read Value: Unknown 00:12:09.012 Deallocate in Write Zeroes: Not Supported 00:12:09.012 Deallocated Guard Field: 0xFFFF 00:12:09.012 Flush: Supported 00:12:09.012 Reservation: Supported 00:12:09.012 Namespace Sharing Capabilities: Multiple Controllers 00:12:09.012 Size (in LBAs): 131072 (0GiB) 00:12:09.012 Capacity (in LBAs): 131072 (0GiB) 00:12:09.012 
Utilization (in LBAs): 131072 (0GiB) 00:12:09.012 NGUID: 4B83DFFDD27848D6B9FFB908545A33F4 00:12:09.012 UUID: 4b83dffd-d278-48d6-b9ff-b908545a33f4 00:12:09.012 Thin Provisioning: Not Supported 00:12:09.012 Per-NS Atomic Units: Yes 00:12:09.012 Atomic Boundary Size (Normal): 0 00:12:09.012 Atomic Boundary Size (PFail): 0 00:12:09.012 Atomic Boundary Offset: 0 00:12:09.012 Maximum Single Source Range Length: 65535 00:12:09.012 Maximum Copy Length: 65535 00:12:09.012 Maximum Source Range Count: 1 00:12:09.012 NGUID/EUI64 Never Reused: No 00:12:09.012 Namespace Write Protected: No 00:12:09.012 Number of LBA Formats: 1 00:12:09.012 Current LBA Format: LBA Format #00 00:12:09.012 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:09.012 00:12:09.012 10:58:05 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:09.012 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.012 [2024-05-15 10:58:05.623215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:14.307 [2024-05-15 10:58:10.640939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:14.307 Initializing NVMe Controllers 00:12:14.307 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:14.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:14.307 Initialization complete. Launching workers. 00:12:14.307 ======================================================== 00:12:14.307 Latency(us) 00:12:14.307 Device Information : IOPS MiB/s Average min max 00:12:14.307 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40290.61 157.39 3176.30 819.31 6843.45 00:12:14.307 ======================================================== 00:12:14.307 Total : 40290.61 157.39 3176.30 819.31 6843.45 00:12:14.307 00:12:14.307 10:58:10 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:14.307 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.307 [2024-05-15 10:58:10.815809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.595 [2024-05-15 10:58:15.850887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.595 Initializing NVMe Controllers 00:12:19.595 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.595 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:19.595 Initialization complete. Launching workers. 
00:12:19.595 ======================================================== 00:12:19.595 Latency(us) 00:12:19.595 Device Information : IOPS MiB/s Average min max 00:12:19.595 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16047.22 62.68 7975.95 5983.88 9977.85 00:12:19.595 ======================================================== 00:12:19.595 Total : 16047.22 62.68 7975.95 5983.88 9977.85 00:12:19.595 00:12:19.595 10:58:15 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:19.595 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.595 [2024-05-15 10:58:16.039809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.885 [2024-05-15 10:58:21.094712] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.885 Initializing NVMe Controllers 00:12:24.885 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.885 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:24.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:24.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:24.885 Initialization complete. Launching workers. 00:12:24.885 Starting thread on core 2 00:12:24.885 Starting thread on core 3 00:12:24.885 Starting thread on core 1 00:12:24.885 10:58:21 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:24.885 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.885 [2024-05-15 10:58:21.354979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.186 [2024-05-15 10:58:24.423698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.186 Initializing NVMe Controllers 00:12:28.186 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.186 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.186 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:28.186 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:28.186 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:28.186 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:28.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:28.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:28.186 Initialization complete. Launching workers. 
00:12:28.186 Starting thread on core 1 with urgent priority queue 00:12:28.186 Starting thread on core 2 with urgent priority queue 00:12:28.186 Starting thread on core 3 with urgent priority queue 00:12:28.186 Starting thread on core 0 with urgent priority queue 00:12:28.186 SPDK bdev Controller (SPDK1 ) core 0: 9431.00 IO/s 10.60 secs/100000 ios 00:12:28.186 SPDK bdev Controller (SPDK1 ) core 1: 8144.67 IO/s 12.28 secs/100000 ios 00:12:28.186 SPDK bdev Controller (SPDK1 ) core 2: 10756.33 IO/s 9.30 secs/100000 ios 00:12:28.186 SPDK bdev Controller (SPDK1 ) core 3: 8110.00 IO/s 12.33 secs/100000 ios 00:12:28.186 ======================================================== 00:12:28.186 00:12:28.186 10:58:24 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:28.186 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.186 [2024-05-15 10:58:24.686069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.186 [2024-05-15 10:58:24.720256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.186 Initializing NVMe Controllers 00:12:28.186 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.186 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.186 Namespace ID: 1 size: 0GB 00:12:28.186 Initialization complete. 00:12:28.186 INFO: using host memory buffer for IO 00:12:28.186 Hello world! 00:12:28.186 10:58:24 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:28.186 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.447 [2024-05-15 10:58:24.979024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.390 Initializing NVMe Controllers 00:12:29.390 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.390 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.390 Initialization complete. Launching workers. 
00:12:29.390 submit (in ns) avg, min, max = 7083.0, 3900.8, 4000441.7 00:12:29.390 complete (in ns) avg, min, max = 16817.0, 2365.8, 4000882.5 00:12:29.390 00:12:29.390 Submit histogram 00:12:29.390 ================ 00:12:29.390 Range in us Cumulative Count 00:12:29.390 3.893 - 3.920: 1.1724% ( 232) 00:12:29.390 3.920 - 3.947: 7.0346% ( 1160) 00:12:29.390 3.947 - 3.973: 17.3792% ( 2047) 00:12:29.390 3.973 - 4.000: 28.7194% ( 2244) 00:12:29.390 4.000 - 4.027: 40.2062% ( 2273) 00:12:29.390 4.027 - 4.053: 50.6367% ( 2064) 00:12:29.390 4.053 - 4.080: 67.0053% ( 3239) 00:12:29.390 4.080 - 4.107: 80.9278% ( 2755) 00:12:29.390 4.107 - 4.133: 91.8941% ( 2170) 00:12:29.390 4.133 - 4.160: 97.2104% ( 1052) 00:12:29.390 4.160 - 4.187: 98.8225% ( 319) 00:12:29.390 4.187 - 4.213: 99.3380% ( 102) 00:12:29.390 4.213 - 4.240: 99.5048% ( 33) 00:12:29.390 4.240 - 4.267: 99.5502% ( 9) 00:12:29.390 4.507 - 4.533: 99.5553% ( 1) 00:12:29.390 4.720 - 4.747: 99.5603% ( 1) 00:12:29.390 4.773 - 4.800: 99.5654% ( 1) 00:12:29.390 4.933 - 4.960: 99.5704% ( 1) 00:12:29.390 5.227 - 5.253: 99.5755% ( 1) 00:12:29.390 5.307 - 5.333: 99.5806% ( 1) 00:12:29.390 5.573 - 5.600: 99.5856% ( 1) 00:12:29.390 5.653 - 5.680: 99.5907% ( 1) 00:12:29.390 5.707 - 5.733: 99.5957% ( 1) 00:12:29.390 5.733 - 5.760: 99.6008% ( 1) 00:12:29.390 5.920 - 5.947: 99.6058% ( 1) 00:12:29.390 5.947 - 5.973: 99.6109% ( 1) 00:12:29.390 6.027 - 6.053: 99.6159% ( 1) 00:12:29.390 6.053 - 6.080: 99.6210% ( 1) 00:12:29.390 6.080 - 6.107: 99.6311% ( 2) 00:12:29.390 6.133 - 6.160: 99.6361% ( 1) 00:12:29.390 6.160 - 6.187: 99.6412% ( 1) 00:12:29.390 6.187 - 6.213: 99.6513% ( 2) 00:12:29.390 6.267 - 6.293: 99.6564% ( 1) 00:12:29.390 6.293 - 6.320: 99.6614% ( 1) 00:12:29.390 6.320 - 6.347: 99.6867% ( 5) 00:12:29.390 6.347 - 6.373: 99.6917% ( 1) 00:12:29.390 6.373 - 6.400: 99.6968% ( 1) 00:12:29.390 6.427 - 6.453: 99.7119% ( 3) 00:12:29.390 6.560 - 6.587: 99.7271% ( 3) 00:12:29.390 6.587 - 6.613: 99.7423% ( 3) 00:12:29.390 6.613 - 6.640: 99.7473% ( 1) 00:12:29.390 6.640 - 6.667: 99.7524% ( 1) 00:12:29.390 6.720 - 6.747: 99.7574% ( 1) 00:12:29.390 6.773 - 6.800: 99.7625% ( 1) 00:12:29.390 6.800 - 6.827: 99.7726% ( 2) 00:12:29.390 6.880 - 6.933: 99.7776% ( 1) 00:12:29.390 7.040 - 7.093: 99.7827% ( 1) 00:12:29.390 7.147 - 7.200: 99.7878% ( 1) 00:12:29.390 7.253 - 7.307: 99.7928% ( 1) 00:12:29.390 7.307 - 7.360: 99.7979% ( 1) 00:12:29.390 7.360 - 7.413: 99.8181% ( 4) 00:12:29.390 7.413 - 7.467: 99.8231% ( 1) 00:12:29.390 7.467 - 7.520: 99.8332% ( 2) 00:12:29.390 7.520 - 7.573: 99.8383% ( 1) 00:12:29.390 7.573 - 7.627: 99.8433% ( 1) 00:12:29.390 7.627 - 7.680: 99.8534% ( 2) 00:12:29.390 7.680 - 7.733: 99.8585% ( 1) 00:12:29.390 7.787 - 7.840: 99.8636% ( 1) 00:12:29.390 7.840 - 7.893: 99.8737% ( 2) 00:12:29.390 7.947 - 8.000: 99.8838% ( 2) 00:12:29.390 8.000 - 8.053: 99.8888% ( 1) 00:12:29.390 8.053 - 8.107: 99.8989% ( 2) 00:12:29.390 8.107 - 8.160: 99.9040% ( 1) 00:12:29.390 8.373 - 8.427: 99.9090% ( 1) 00:12:29.390 9.333 - 9.387: 99.9141% ( 1) 00:12:29.390 9.920 - 9.973: 99.9191% ( 1) 00:12:29.390 11.787 - 11.840: 99.9242% ( 1) 00:12:29.390 3986.773 - 4014.080: 100.0000% ( 15) 00:12:29.390 00:12:29.390 Complete histogram 00:12:29.390 ================== 00:12:29.390 Range in us Cumulative Count 00:12:29.390 2.360 - 2.373: 0.0051% ( 1) 00:12:29.390 2.373 - 2.387: 0.0657% ( 12) 00:12:29.390 2.387 - 2.400: 1.0663% ( 198) 00:12:29.390 2.400 - 2.413: 1.1623% ( 19) 00:12:29.390 2.413 - 2.427: 1.2786% ( 23) 00:12:29.390 2.427 - 2.440: 1.8142% ( 106) 00:12:29.391 2.440 - 
2.453: 63.0483% ( 12117) 00:12:29.391 2.453 - 2.467: 68.4607% ( 1071) 00:12:29.391 2.467 - 2.480: 78.1181% ( 1911) 00:12:29.391 2.480 - [2024-05-15 10:58:25.998699] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.652 2.493: 81.3069% ( 631) 00:12:29.652 2.493 - 2.507: 82.2872% ( 194) 00:12:29.652 2.507 - 2.520: 87.3661% ( 1005) 00:12:29.652 2.520 - 2.533: 94.1227% ( 1337) 00:12:29.652 2.533 - 2.547: 97.0184% ( 573) 00:12:29.652 2.547 - 2.560: 98.2969% ( 253) 00:12:29.652 2.560 - 2.573: 99.0550% ( 150) 00:12:29.652 2.573 - 2.587: 99.2976% ( 48) 00:12:29.652 2.587 - 2.600: 99.3481% ( 10) 00:12:29.652 2.613 - 2.627: 99.3582% ( 2) 00:12:29.652 4.533 - 4.560: 99.3683% ( 2) 00:12:29.652 4.587 - 4.613: 99.3784% ( 2) 00:12:29.652 4.613 - 4.640: 99.3835% ( 1) 00:12:29.652 4.640 - 4.667: 99.3936% ( 2) 00:12:29.652 4.667 - 4.693: 99.3986% ( 1) 00:12:29.652 4.693 - 4.720: 99.4037% ( 1) 00:12:29.652 4.747 - 4.773: 99.4087% ( 1) 00:12:29.652 4.773 - 4.800: 99.4188% ( 2) 00:12:29.652 4.827 - 4.853: 99.4391% ( 4) 00:12:29.652 4.853 - 4.880: 99.4441% ( 1) 00:12:29.652 4.880 - 4.907: 99.4492% ( 1) 00:12:29.652 4.907 - 4.933: 99.4542% ( 1) 00:12:29.652 4.933 - 4.960: 99.4593% ( 1) 00:12:29.652 4.987 - 5.013: 99.4694% ( 2) 00:12:29.652 5.013 - 5.040: 99.4744% ( 1) 00:12:29.652 5.040 - 5.067: 99.4845% ( 2) 00:12:29.652 5.067 - 5.093: 99.4997% ( 3) 00:12:29.652 5.120 - 5.147: 99.5098% ( 2) 00:12:29.652 5.173 - 5.200: 99.5199% ( 2) 00:12:29.652 5.253 - 5.280: 99.5250% ( 1) 00:12:29.652 5.307 - 5.333: 99.5300% ( 1) 00:12:29.652 5.440 - 5.467: 99.5351% ( 1) 00:12:29.652 5.813 - 5.840: 99.5502% ( 3) 00:12:29.652 5.840 - 5.867: 99.5553% ( 1) 00:12:29.652 5.973 - 6.000: 99.5654% ( 2) 00:12:29.652 6.160 - 6.187: 99.5704% ( 1) 00:12:29.652 6.187 - 6.213: 99.5755% ( 1) 00:12:29.652 6.240 - 6.267: 99.5806% ( 1) 00:12:29.652 6.613 - 6.640: 99.5907% ( 2) 00:12:29.652 6.693 - 6.720: 99.5957% ( 1) 00:12:29.652 6.747 - 6.773: 99.6008% ( 1) 00:12:29.652 6.773 - 6.800: 99.6058% ( 1) 00:12:29.652 6.827 - 6.880: 99.6109% ( 1) 00:12:29.652 6.880 - 6.933: 99.6159% ( 1) 00:12:29.652 9.067 - 9.120: 99.6210% ( 1) 00:12:29.652 12.693 - 12.747: 99.6260% ( 1) 00:12:29.652 13.013 - 13.067: 99.6311% ( 1) 00:12:29.652 43.733 - 43.947: 99.6361% ( 1) 00:12:29.652 171.520 - 172.373: 99.6412% ( 1) 00:12:29.652 3986.773 - 4014.080: 100.0000% ( 71) 00:12:29.652 00:12:29.652 10:58:26 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:29.652 10:58:26 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:29.652 10:58:26 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:29.652 10:58:26 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:29.652 10:58:26 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.652 [ 00:12:29.652 { 00:12:29.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.652 "subtype": "Discovery", 00:12:29.652 "listen_addresses": [], 00:12:29.652 "allow_any_host": true, 00:12:29.652 "hosts": [] 00:12:29.652 }, 00:12:29.652 { 00:12:29.652 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.652 "subtype": "NVMe", 00:12:29.652 "listen_addresses": [ 00:12:29.652 { 00:12:29.652 "trtype": "VFIOUSER", 00:12:29.652 "adrfam": "IPv4", 00:12:29.652 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.652 "trsvcid": "0" 00:12:29.652 } 
00:12:29.652 ], 00:12:29.652 "allow_any_host": true, 00:12:29.652 "hosts": [], 00:12:29.652 "serial_number": "SPDK1", 00:12:29.652 "model_number": "SPDK bdev Controller", 00:12:29.652 "max_namespaces": 32, 00:12:29.652 "min_cntlid": 1, 00:12:29.652 "max_cntlid": 65519, 00:12:29.652 "namespaces": [ 00:12:29.652 { 00:12:29.652 "nsid": 1, 00:12:29.652 "bdev_name": "Malloc1", 00:12:29.652 "name": "Malloc1", 00:12:29.652 "nguid": "4B83DFFDD27848D6B9FFB908545A33F4", 00:12:29.652 "uuid": "4b83dffd-d278-48d6-b9ff-b908545a33f4" 00:12:29.652 } 00:12:29.652 ] 00:12:29.652 }, 00:12:29.652 { 00:12:29.652 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.652 "subtype": "NVMe", 00:12:29.652 "listen_addresses": [ 00:12:29.652 { 00:12:29.652 "trtype": "VFIOUSER", 00:12:29.652 "adrfam": "IPv4", 00:12:29.653 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.653 "trsvcid": "0" 00:12:29.653 } 00:12:29.653 ], 00:12:29.653 "allow_any_host": true, 00:12:29.653 "hosts": [], 00:12:29.653 "serial_number": "SPDK2", 00:12:29.653 "model_number": "SPDK bdev Controller", 00:12:29.653 "max_namespaces": 32, 00:12:29.653 "min_cntlid": 1, 00:12:29.653 "max_cntlid": 65519, 00:12:29.653 "namespaces": [ 00:12:29.653 { 00:12:29.653 "nsid": 1, 00:12:29.653 "bdev_name": "Malloc2", 00:12:29.653 "name": "Malloc2", 00:12:29.653 "nguid": "A21D43CD93494CB496E371B050F3B0EB", 00:12:29.653 "uuid": "a21d43cd-9349-4cb4-96e3-71b050f3b0eb" 00:12:29.653 } 00:12:29.653 ] 00:12:29.653 } 00:12:29.653 ] 00:12:29.653 10:58:26 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:29.653 10:58:26 -- target/nvmf_vfio_user.sh@34 -- # aerpid=258603 00:12:29.653 10:58:26 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:29.653 10:58:26 -- common/autotest_common.sh@1261 -- # local i=0 00:12:29.653 10:58:26 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:29.653 10:58:26 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.653 10:58:26 -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:12:29.653 10:58:26 -- common/autotest_common.sh@1264 -- # i=1 00:12:29.653 10:58:26 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:12:29.653 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.914 10:58:26 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.914 10:58:26 -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:12:29.914 10:58:26 -- common/autotest_common.sh@1264 -- # i=2 00:12:29.914 10:58:26 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:12:29.914 [2024-05-15 10:58:26.383031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.914 10:58:26 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.914 10:58:26 -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:29.914 10:58:26 -- common/autotest_common.sh@1272 -- # return 0 00:12:29.914 10:58:26 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:29.914 10:58:26 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:30.174 Malloc3 00:12:30.174 10:58:26 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:30.174 [2024-05-15 10:58:26.753540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:30.174 10:58:26 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:30.174 Asynchronous Event Request test 00:12:30.174 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:30.174 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:30.174 Registering asynchronous event callbacks... 00:12:30.174 Starting namespace attribute notice tests for all controllers... 00:12:30.174 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:30.174 aer_cb - Changed Namespace 00:12:30.174 Cleaning up... 00:12:30.436 [ 00:12:30.436 { 00:12:30.436 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:30.436 "subtype": "Discovery", 00:12:30.436 "listen_addresses": [], 00:12:30.436 "allow_any_host": true, 00:12:30.436 "hosts": [] 00:12:30.436 }, 00:12:30.436 { 00:12:30.436 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:30.436 "subtype": "NVMe", 00:12:30.436 "listen_addresses": [ 00:12:30.436 { 00:12:30.436 "trtype": "VFIOUSER", 00:12:30.436 "adrfam": "IPv4", 00:12:30.436 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:30.436 "trsvcid": "0" 00:12:30.436 } 00:12:30.436 ], 00:12:30.436 "allow_any_host": true, 00:12:30.436 "hosts": [], 00:12:30.436 "serial_number": "SPDK1", 00:12:30.436 "model_number": "SPDK bdev Controller", 00:12:30.436 "max_namespaces": 32, 00:12:30.436 "min_cntlid": 1, 00:12:30.436 "max_cntlid": 65519, 00:12:30.436 "namespaces": [ 00:12:30.436 { 00:12:30.436 "nsid": 1, 00:12:30.436 "bdev_name": "Malloc1", 00:12:30.436 "name": "Malloc1", 00:12:30.436 "nguid": "4B83DFFDD27848D6B9FFB908545A33F4", 00:12:30.436 "uuid": "4b83dffd-d278-48d6-b9ff-b908545a33f4" 00:12:30.436 }, 00:12:30.436 { 00:12:30.436 "nsid": 2, 00:12:30.436 "bdev_name": "Malloc3", 00:12:30.436 "name": "Malloc3", 00:12:30.436 "nguid": "AB645956F6AB46E2A593A4D9B028848E", 00:12:30.436 "uuid": "ab645956-f6ab-46e2-a593-a4d9b028848e" 00:12:30.436 } 00:12:30.436 ] 00:12:30.436 }, 00:12:30.436 { 00:12:30.436 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:30.436 "subtype": "NVMe", 00:12:30.436 "listen_addresses": [ 00:12:30.436 { 00:12:30.436 "trtype": "VFIOUSER", 00:12:30.436 "adrfam": "IPv4", 00:12:30.436 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:30.436 "trsvcid": "0" 00:12:30.436 } 00:12:30.436 ], 00:12:30.436 "allow_any_host": true, 00:12:30.436 "hosts": [], 00:12:30.436 "serial_number": "SPDK2", 00:12:30.436 "model_number": "SPDK bdev Controller", 00:12:30.436 "max_namespaces": 32, 00:12:30.436 "min_cntlid": 1, 00:12:30.436 "max_cntlid": 65519, 00:12:30.436 "namespaces": [ 00:12:30.436 { 00:12:30.436 "nsid": 1, 00:12:30.436 "bdev_name": "Malloc2", 00:12:30.436 "name": "Malloc2", 00:12:30.436 "nguid": "A21D43CD93494CB496E371B050F3B0EB", 00:12:30.436 "uuid": 
"a21d43cd-9349-4cb4-96e3-71b050f3b0eb" 00:12:30.436 } 00:12:30.436 ] 00:12:30.436 } 00:12:30.436 ] 00:12:30.436 10:58:26 -- target/nvmf_vfio_user.sh@44 -- # wait 258603 00:12:30.436 10:58:26 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.436 10:58:26 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:30.436 10:58:26 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:30.436 10:58:26 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:30.436 [2024-05-15 10:58:26.967757] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:12:30.436 [2024-05-15 10:58:26.967802] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258818 ] 00:12:30.436 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.436 [2024-05-15 10:58:26.999360] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:30.436 [2024-05-15 10:58:27.007763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:30.436 [2024-05-15 10:58:27.007783] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f18fd274000 00:12:30.436 [2024-05-15 10:58:27.008764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.436 [2024-05-15 10:58:27.009773] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.436 [2024-05-15 10:58:27.010776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.436 [2024-05-15 10:58:27.011788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.436 [2024-05-15 10:58:27.012796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.436 [2024-05-15 10:58:27.013806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.436 [2024-05-15 10:58:27.014813] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.436 [2024-05-15 10:58:27.015817] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.436 [2024-05-15 10:58:27.016823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:30.436 [2024-05-15 10:58:27.016836] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f18fd269000 00:12:30.436 [2024-05-15 10:58:27.018159] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:12:30.436 [2024-05-15 10:58:27.038701] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:30.436 [2024-05-15 10:58:27.038723] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:30.436 [2024-05-15 10:58:27.040771] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:30.436 [2024-05-15 10:58:27.040817] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:30.436 [2024-05-15 10:58:27.040898] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:30.436 [2024-05-15 10:58:27.040914] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:30.436 [2024-05-15 10:58:27.040920] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:30.436 [2024-05-15 10:58:27.041781] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:30.437 [2024-05-15 10:58:27.041790] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:30.437 [2024-05-15 10:58:27.041797] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:30.437 [2024-05-15 10:58:27.042784] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:30.437 [2024-05-15 10:58:27.042792] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:30.437 [2024-05-15 10:58:27.042799] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:30.437 [2024-05-15 10:58:27.043790] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:30.437 [2024-05-15 10:58:27.043799] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:30.437 [2024-05-15 10:58:27.044800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:30.437 [2024-05-15 10:58:27.044808] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:30.437 [2024-05-15 10:58:27.044813] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:30.437 [2024-05-15 10:58:27.044819] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:30.437 [2024-05-15 10:58:27.044927] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting 
CC.EN = 1 00:12:30.437 [2024-05-15 10:58:27.044932] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:30.437 [2024-05-15 10:58:27.044936] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:30.437 [2024-05-15 10:58:27.045807] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:30.437 [2024-05-15 10:58:27.046808] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:30.437 [2024-05-15 10:58:27.047827] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:30.437 [2024-05-15 10:58:27.048815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:30.437 [2024-05-15 10:58:27.048854] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:30.437 [2024-05-15 10:58:27.049830] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:30.437 [2024-05-15 10:58:27.049839] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:30.437 [2024-05-15 10:58:27.049844] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.049864] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:30.437 [2024-05-15 10:58:27.049872] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.049883] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.437 [2024-05-15 10:58:27.049888] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.437 [2024-05-15 10:58:27.049899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.437 [2024-05-15 10:58:27.056553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:30.437 [2024-05-15 10:58:27.056565] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:30.437 [2024-05-15 10:58:27.056570] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:30.437 [2024-05-15 10:58:27.056574] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:30.437 [2024-05-15 10:58:27.056579] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:30.437 [2024-05-15 10:58:27.056584] 
nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:30.437 [2024-05-15 10:58:27.056588] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:30.437 [2024-05-15 10:58:27.056593] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.056600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.056613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:30.437 [2024-05-15 10:58:27.064551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:30.437 [2024-05-15 10:58:27.064563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.437 [2024-05-15 10:58:27.064571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.437 [2024-05-15 10:58:27.064580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.437 [2024-05-15 10:58:27.064588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.437 [2024-05-15 10:58:27.064593] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.064601] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.064611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:30.437 [2024-05-15 10:58:27.072550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:30.437 [2024-05-15 10:58:27.072558] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:30.437 [2024-05-15 10:58:27.072563] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.072571] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.072577] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.072586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:30.437 [2024-05-15 10:58:27.080550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 
dnr:0 00:12:30.437 [2024-05-15 10:58:27.080603] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.080611] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:30.437 [2024-05-15 10:58:27.080619] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:30.437 [2024-05-15 10:58:27.080623] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:30.437 [2024-05-15 10:58:27.080629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:30.699 [2024-05-15 10:58:27.088553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:30.699 [2024-05-15 10:58:27.088567] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:30.699 [2024-05-15 10:58:27.088579] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.088587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.088594] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.700 [2024-05-15 10:58:27.088600] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.700 [2024-05-15 10:58:27.088606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.096551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.096565] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.096572] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.096579] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.700 [2024-05-15 10:58:27.096584] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.700 [2024-05-15 10:58:27.096590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.104553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.104562] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.104569] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting 
state to set supported log pages (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.104579] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.104584] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.104589] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.104594] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:30.700 [2024-05-15 10:58:27.104598] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:30.700 [2024-05-15 10:58:27.104603] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:30.700 [2024-05-15 10:58:27.104621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.112552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.112565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.120551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.120564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.128552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.128565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.136552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.136566] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:30.700 [2024-05-15 10:58:27.136571] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:30.700 [2024-05-15 10:58:27.136575] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:30.700 [2024-05-15 10:58:27.136578] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:30.700 [2024-05-15 10:58:27.136584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:30.700 [2024-05-15 10:58:27.136592] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:30.700 [2024-05-15 10:58:27.136596] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:30.700 [2024-05-15 10:58:27.136602] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.136609] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:30.700 [2024-05-15 10:58:27.136613] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.700 [2024-05-15 10:58:27.136619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.136626] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:30.700 [2024-05-15 10:58:27.136630] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:30.700 [2024-05-15 10:58:27.136636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:30.700 [2024-05-15 10:58:27.144551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.144567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.144576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:30.700 [2024-05-15 10:58:27.144584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:30.700 ===================================================== 00:12:30.700 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:30.700 ===================================================== 00:12:30.700 Controller Capabilities/Features 00:12:30.700 ================================ 00:12:30.700 Vendor ID: 4e58 00:12:30.700 Subsystem Vendor ID: 4e58 00:12:30.700 Serial Number: SPDK2 00:12:30.700 Model Number: SPDK bdev Controller 00:12:30.700 Firmware Version: 24.05 00:12:30.700 Recommended Arb Burst: 6 00:12:30.700 IEEE OUI Identifier: 8d 6b 50 00:12:30.700 Multi-path I/O 00:12:30.700 May have multiple subsystem ports: Yes 00:12:30.700 May have multiple controllers: Yes 00:12:30.700 Associated with SR-IOV VF: No 00:12:30.700 Max Data Transfer Size: 131072 00:12:30.700 Max Number of Namespaces: 32 00:12:30.700 Max Number of I/O Queues: 127 00:12:30.700 NVMe Specification Version (VS): 1.3 00:12:30.700 NVMe Specification Version (Identify): 1.3 00:12:30.700 Maximum Queue Entries: 256 00:12:30.700 Contiguous Queues Required: Yes 00:12:30.700 Arbitration Mechanisms Supported 00:12:30.700 Weighted Round Robin: Not Supported 00:12:30.700 Vendor Specific: Not Supported 00:12:30.700 Reset Timeout: 15000 ms 00:12:30.700 Doorbell Stride: 4 bytes 00:12:30.700 NVM Subsystem Reset: Not Supported 00:12:30.700 Command Sets Supported 00:12:30.700 NVM Command Set: Supported 00:12:30.700 Boot Partition: Not Supported 00:12:30.700 Memory Page Size Minimum: 4096 bytes 00:12:30.700 Memory Page Size Maximum: 4096 bytes 00:12:30.700 Persistent Memory Region: Not Supported 00:12:30.700 Optional Asynchronous Events Supported 00:12:30.700 Namespace Attribute Notices: Supported 00:12:30.700 Firmware Activation Notices: Not Supported 
00:12:30.700 ANA Change Notices: Not Supported 00:12:30.700 PLE Aggregate Log Change Notices: Not Supported 00:12:30.700 LBA Status Info Alert Notices: Not Supported 00:12:30.700 EGE Aggregate Log Change Notices: Not Supported 00:12:30.700 Normal NVM Subsystem Shutdown event: Not Supported 00:12:30.700 Zone Descriptor Change Notices: Not Supported 00:12:30.700 Discovery Log Change Notices: Not Supported 00:12:30.700 Controller Attributes 00:12:30.700 128-bit Host Identifier: Supported 00:12:30.700 Non-Operational Permissive Mode: Not Supported 00:12:30.700 NVM Sets: Not Supported 00:12:30.700 Read Recovery Levels: Not Supported 00:12:30.700 Endurance Groups: Not Supported 00:12:30.700 Predictable Latency Mode: Not Supported 00:12:30.700 Traffic Based Keep ALive: Not Supported 00:12:30.700 Namespace Granularity: Not Supported 00:12:30.700 SQ Associations: Not Supported 00:12:30.700 UUID List: Not Supported 00:12:30.700 Multi-Domain Subsystem: Not Supported 00:12:30.700 Fixed Capacity Management: Not Supported 00:12:30.700 Variable Capacity Management: Not Supported 00:12:30.700 Delete Endurance Group: Not Supported 00:12:30.700 Delete NVM Set: Not Supported 00:12:30.700 Extended LBA Formats Supported: Not Supported 00:12:30.700 Flexible Data Placement Supported: Not Supported 00:12:30.700 00:12:30.700 Controller Memory Buffer Support 00:12:30.700 ================================ 00:12:30.700 Supported: No 00:12:30.700 00:12:30.700 Persistent Memory Region Support 00:12:30.700 ================================ 00:12:30.700 Supported: No 00:12:30.700 00:12:30.700 Admin Command Set Attributes 00:12:30.700 ============================ 00:12:30.700 Security Send/Receive: Not Supported 00:12:30.700 Format NVM: Not Supported 00:12:30.700 Firmware Activate/Download: Not Supported 00:12:30.700 Namespace Management: Not Supported 00:12:30.700 Device Self-Test: Not Supported 00:12:30.700 Directives: Not Supported 00:12:30.700 NVMe-MI: Not Supported 00:12:30.700 Virtualization Management: Not Supported 00:12:30.700 Doorbell Buffer Config: Not Supported 00:12:30.700 Get LBA Status Capability: Not Supported 00:12:30.700 Command & Feature Lockdown Capability: Not Supported 00:12:30.700 Abort Command Limit: 4 00:12:30.700 Async Event Request Limit: 4 00:12:30.700 Number of Firmware Slots: N/A 00:12:30.700 Firmware Slot 1 Read-Only: N/A 00:12:30.701 Firmware Activation Without Reset: N/A 00:12:30.701 Multiple Update Detection Support: N/A 00:12:30.701 Firmware Update Granularity: No Information Provided 00:12:30.701 Per-Namespace SMART Log: No 00:12:30.701 Asymmetric Namespace Access Log Page: Not Supported 00:12:30.701 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:30.701 Command Effects Log Page: Supported 00:12:30.701 Get Log Page Extended Data: Supported 00:12:30.701 Telemetry Log Pages: Not Supported 00:12:30.701 Persistent Event Log Pages: Not Supported 00:12:30.701 Supported Log Pages Log Page: May Support 00:12:30.701 Commands Supported & Effects Log Page: Not Supported 00:12:30.701 Feature Identifiers & Effects Log Page:May Support 00:12:30.701 NVMe-MI Commands & Effects Log Page: May Support 00:12:30.701 Data Area 4 for Telemetry Log: Not Supported 00:12:30.701 Error Log Page Entries Supported: 128 00:12:30.701 Keep Alive: Supported 00:12:30.701 Keep Alive Granularity: 10000 ms 00:12:30.701 00:12:30.701 NVM Command Set Attributes 00:12:30.701 ========================== 00:12:30.701 Submission Queue Entry Size 00:12:30.701 Max: 64 00:12:30.701 Min: 64 00:12:30.701 Completion Queue Entry Size 
00:12:30.701 Max: 16 00:12:30.701 Min: 16 00:12:30.701 Number of Namespaces: 32 00:12:30.701 Compare Command: Supported 00:12:30.701 Write Uncorrectable Command: Not Supported 00:12:30.701 Dataset Management Command: Supported 00:12:30.701 Write Zeroes Command: Supported 00:12:30.701 Set Features Save Field: Not Supported 00:12:30.701 Reservations: Not Supported 00:12:30.701 Timestamp: Not Supported 00:12:30.701 Copy: Supported 00:12:30.701 Volatile Write Cache: Present 00:12:30.701 Atomic Write Unit (Normal): 1 00:12:30.701 Atomic Write Unit (PFail): 1 00:12:30.701 Atomic Compare & Write Unit: 1 00:12:30.701 Fused Compare & Write: Supported 00:12:30.701 Scatter-Gather List 00:12:30.701 SGL Command Set: Supported (Dword aligned) 00:12:30.701 SGL Keyed: Not Supported 00:12:30.701 SGL Bit Bucket Descriptor: Not Supported 00:12:30.701 SGL Metadata Pointer: Not Supported 00:12:30.701 Oversized SGL: Not Supported 00:12:30.701 SGL Metadata Address: Not Supported 00:12:30.701 SGL Offset: Not Supported 00:12:30.701 Transport SGL Data Block: Not Supported 00:12:30.701 Replay Protected Memory Block: Not Supported 00:12:30.701 00:12:30.701 Firmware Slot Information 00:12:30.701 ========================= 00:12:30.701 Active slot: 1 00:12:30.701 Slot 1 Firmware Revision: 24.05 00:12:30.701 00:12:30.701 00:12:30.701 Commands Supported and Effects 00:12:30.701 ============================== 00:12:30.701 Admin Commands 00:12:30.701 -------------- 00:12:30.701 Get Log Page (02h): Supported 00:12:30.701 Identify (06h): Supported 00:12:30.701 Abort (08h): Supported 00:12:30.701 Set Features (09h): Supported 00:12:30.701 Get Features (0Ah): Supported 00:12:30.701 Asynchronous Event Request (0Ch): Supported 00:12:30.701 Keep Alive (18h): Supported 00:12:30.701 I/O Commands 00:12:30.701 ------------ 00:12:30.701 Flush (00h): Supported LBA-Change 00:12:30.701 Write (01h): Supported LBA-Change 00:12:30.701 Read (02h): Supported 00:12:30.701 Compare (05h): Supported 00:12:30.701 Write Zeroes (08h): Supported LBA-Change 00:12:30.701 Dataset Management (09h): Supported LBA-Change 00:12:30.701 Copy (19h): Supported LBA-Change 00:12:30.701 Unknown (79h): Supported LBA-Change 00:12:30.701 Unknown (7Ah): Supported 00:12:30.701 00:12:30.701 Error Log 00:12:30.701 ========= 00:12:30.701 00:12:30.701 Arbitration 00:12:30.701 =========== 00:12:30.701 Arbitration Burst: 1 00:12:30.701 00:12:30.701 Power Management 00:12:30.701 ================ 00:12:30.701 Number of Power States: 1 00:12:30.701 Current Power State: Power State #0 00:12:30.701 Power State #0: 00:12:30.701 Max Power: 0.00 W 00:12:30.701 Non-Operational State: Operational 00:12:30.701 Entry Latency: Not Reported 00:12:30.701 Exit Latency: Not Reported 00:12:30.701 Relative Read Throughput: 0 00:12:30.701 Relative Read Latency: 0 00:12:30.701 Relative Write Throughput: 0 00:12:30.701 Relative Write Latency: 0 00:12:30.701 Idle Power: Not Reported 00:12:30.701 Active Power: Not Reported 00:12:30.701 Non-Operational Permissive Mode: Not Supported 00:12:30.701 00:12:30.701 Health Information 00:12:30.701 ================== 00:12:30.701 Critical Warnings: 00:12:30.701 Available Spare Space: OK 00:12:30.701 Temperature: OK 00:12:30.701 Device Reliability: OK 00:12:30.701 Read Only: No 00:12:30.701 Volatile Memory Backup: OK 00:12:30.701 Current Temperature: 0 Kelvin (-2[2024-05-15 10:58:27.144685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:30.701 [2024-05-15 10:58:27.152552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:30.701 [2024-05-15 10:58:27.152584] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:30.701 [2024-05-15 10:58:27.152594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.701 [2024-05-15 10:58:27.152600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.701 [2024-05-15 10:58:27.152606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.701 [2024-05-15 10:58:27.152612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.701 [2024-05-15 10:58:27.152649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:30.701 [2024-05-15 10:58:27.152659] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:30.701 [2024-05-15 10:58:27.153660] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:30.701 [2024-05-15 10:58:27.153710] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:30.701 [2024-05-15 10:58:27.153718] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:30.701 [2024-05-15 10:58:27.154663] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:30.701 [2024-05-15 10:58:27.154675] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:30.701 [2024-05-15 10:58:27.154723] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:30.701 [2024-05-15 10:58:27.157554] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:30.701 73 Celsius) 00:12:30.701 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:30.701 Available Spare: 0% 00:12:30.701 Available Spare Threshold: 0% 00:12:30.701 Life Percentage Used: 0% 00:12:30.701 Data Units Read: 0 00:12:30.701 Data Units Written: 0 00:12:30.701 Host Read Commands: 0 00:12:30.701 Host Write Commands: 0 00:12:30.701 Controller Busy Time: 0 minutes 00:12:30.701 Power Cycles: 0 00:12:30.701 Power On Hours: 0 hours 00:12:30.701 Unsafe Shutdowns: 0 00:12:30.701 Unrecoverable Media Errors: 0 00:12:30.701 Lifetime Error Log Entries: 0 00:12:30.701 Warning Temperature Time: 0 minutes 00:12:30.701 Critical Temperature Time: 0 minutes 00:12:30.701 00:12:30.701 Number of Queues 00:12:30.701 ================ 00:12:30.701 Number of I/O Submission Queues: 127 00:12:30.701 Number of I/O Completion Queues: 127 00:12:30.701 00:12:30.701 Active Namespaces 00:12:30.701 ================= 00:12:30.701 Namespace ID:1 00:12:30.701 Error Recovery Timeout: Unlimited 00:12:30.701 Command Set Identifier: NVM (00h) 00:12:30.701 Deallocate: Supported 00:12:30.701 
Deallocated/Unwritten Error: Not Supported 00:12:30.701 Deallocated Read Value: Unknown 00:12:30.701 Deallocate in Write Zeroes: Not Supported 00:12:30.701 Deallocated Guard Field: 0xFFFF 00:12:30.701 Flush: Supported 00:12:30.701 Reservation: Supported 00:12:30.701 Namespace Sharing Capabilities: Multiple Controllers 00:12:30.701 Size (in LBAs): 131072 (0GiB) 00:12:30.701 Capacity (in LBAs): 131072 (0GiB) 00:12:30.701 Utilization (in LBAs): 131072 (0GiB) 00:12:30.701 NGUID: A21D43CD93494CB496E371B050F3B0EB 00:12:30.701 UUID: a21d43cd-9349-4cb4-96e3-71b050f3b0eb 00:12:30.701 Thin Provisioning: Not Supported 00:12:30.701 Per-NS Atomic Units: Yes 00:12:30.701 Atomic Boundary Size (Normal): 0 00:12:30.701 Atomic Boundary Size (PFail): 0 00:12:30.701 Atomic Boundary Offset: 0 00:12:30.701 Maximum Single Source Range Length: 65535 00:12:30.701 Maximum Copy Length: 65535 00:12:30.701 Maximum Source Range Count: 1 00:12:30.701 NGUID/EUI64 Never Reused: No 00:12:30.701 Namespace Write Protected: No 00:12:30.701 Number of LBA Formats: 1 00:12:30.701 Current LBA Format: LBA Format #00 00:12:30.701 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:30.701 00:12:30.701 10:58:27 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:30.701 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.701 [2024-05-15 10:58:27.345297] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:36.020 [2024-05-15 10:58:32.458738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:36.020 Initializing NVMe Controllers 00:12:36.020 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:36.020 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:36.020 Initialization complete. Launching workers. 00:12:36.020 ======================================================== 00:12:36.020 Latency(us) 00:12:36.020 Device Information : IOPS MiB/s Average min max 00:12:36.020 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40795.60 159.36 3139.21 815.43 6847.10 00:12:36.020 ======================================================== 00:12:36.020 Total : 40795.60 159.36 3139.21 815.43 6847.10 00:12:36.020 00:12:36.020 10:58:32 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:36.020 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.020 [2024-05-15 10:58:32.638310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.309 [2024-05-15 10:58:37.659316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.309 Initializing NVMe Controllers 00:12:41.309 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.309 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:41.309 Initialization complete. Launching workers. 
00:12:41.309 ======================================================== 00:12:41.309 Latency(us) 00:12:41.309 Device Information : IOPS MiB/s Average min max 00:12:41.309 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 38186.99 149.17 3351.23 1077.16 7442.62 00:12:41.309 ======================================================== 00:12:41.309 Total : 38186.99 149.17 3351.23 1077.16 7442.62 00:12:41.309 00:12:41.309 10:58:37 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:41.309 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.309 [2024-05-15 10:58:37.848495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.604 [2024-05-15 10:58:42.993660] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.604 Initializing NVMe Controllers 00:12:46.604 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.604 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.604 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:46.604 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:46.604 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:46.604 Initialization complete. Launching workers. 00:12:46.604 Starting thread on core 2 00:12:46.604 Starting thread on core 3 00:12:46.604 Starting thread on core 1 00:12:46.604 10:58:43 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:46.604 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.604 [2024-05-15 10:58:43.248035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.906 [2024-05-15 10:58:46.315828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.906 Initializing NVMe Controllers 00:12:49.906 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.906 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.906 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:49.906 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:49.906 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:49.906 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:49.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:49.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:49.906 Initialization complete. Launching workers. 
00:12:49.906 Starting thread on core 1 with urgent priority queue 00:12:49.906 Starting thread on core 2 with urgent priority queue 00:12:49.906 Starting thread on core 3 with urgent priority queue 00:12:49.906 Starting thread on core 0 with urgent priority queue 00:12:49.906 SPDK bdev Controller (SPDK2 ) core 0: 7732.67 IO/s 12.93 secs/100000 ios 00:12:49.906 SPDK bdev Controller (SPDK2 ) core 1: 14156.67 IO/s 7.06 secs/100000 ios 00:12:49.906 SPDK bdev Controller (SPDK2 ) core 2: 10473.33 IO/s 9.55 secs/100000 ios 00:12:49.906 SPDK bdev Controller (SPDK2 ) core 3: 9357.00 IO/s 10.69 secs/100000 ios 00:12:49.906 ======================================================== 00:12:49.906 00:12:49.906 10:58:46 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.906 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.167 [2024-05-15 10:58:46.574038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.167 [2024-05-15 10:58:46.583087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.167 Initializing NVMe Controllers 00:12:50.167 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.167 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.167 Namespace ID: 1 size: 0GB 00:12:50.167 Initialization complete. 00:12:50.167 INFO: using host memory buffer for IO 00:12:50.167 Hello world! 00:12:50.167 10:58:46 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:50.167 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.428 [2024-05-15 10:58:46.836838] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.374 Initializing NVMe Controllers 00:12:51.374 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.374 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.374 Initialization complete. Launching workers. 
00:12:51.374 submit (in ns) avg, min, max = 8378.2, 3941.7, 4000588.3 00:12:51.374 complete (in ns) avg, min, max = 19434.4, 2384.2, 4994729.2 00:12:51.374 00:12:51.374 Submit histogram 00:12:51.374 ================ 00:12:51.374 Range in us Cumulative Count 00:12:51.374 3.920 - 3.947: 0.0560% ( 11) 00:12:51.374 3.947 - 3.973: 2.8684% ( 552) 00:12:51.374 3.973 - 4.000: 10.7601% ( 1549) 00:12:51.374 4.000 - 4.027: 21.1433% ( 2038) 00:12:51.374 4.027 - 4.053: 31.1901% ( 1972) 00:12:51.374 4.053 - 4.080: 42.2101% ( 2163) 00:12:51.374 4.080 - 4.107: 55.0999% ( 2530) 00:12:51.374 4.107 - 4.133: 72.1673% ( 3350) 00:12:51.374 4.133 - 4.160: 86.2034% ( 2755) 00:12:51.374 4.160 - 4.187: 94.2786% ( 1585) 00:12:51.374 4.187 - 4.213: 97.8194% ( 695) 00:12:51.374 4.213 - 4.240: 99.0371% ( 239) 00:12:51.374 4.240 - 4.267: 99.3733% ( 66) 00:12:51.374 4.267 - 4.293: 99.4701% ( 19) 00:12:51.374 4.293 - 4.320: 99.4854% ( 3) 00:12:51.374 4.320 - 4.347: 99.4905% ( 1) 00:12:51.374 4.560 - 4.587: 99.4956% ( 1) 00:12:51.374 4.720 - 4.747: 99.5007% ( 1) 00:12:51.374 5.040 - 5.067: 99.5058% ( 1) 00:12:51.374 5.493 - 5.520: 99.5160% ( 2) 00:12:51.374 5.573 - 5.600: 99.5211% ( 1) 00:12:51.374 5.707 - 5.733: 99.5262% ( 1) 00:12:51.374 5.787 - 5.813: 99.5313% ( 1) 00:12:51.374 5.813 - 5.840: 99.5364% ( 1) 00:12:51.374 6.000 - 6.027: 99.5415% ( 1) 00:12:51.374 6.053 - 6.080: 99.5568% ( 3) 00:12:51.374 6.107 - 6.133: 99.5619% ( 1) 00:12:51.374 6.133 - 6.160: 99.5720% ( 2) 00:12:51.374 6.160 - 6.187: 99.5771% ( 1) 00:12:51.374 6.187 - 6.213: 99.5822% ( 1) 00:12:51.374 6.267 - 6.293: 99.5924% ( 2) 00:12:51.374 6.293 - 6.320: 99.6077% ( 3) 00:12:51.374 6.347 - 6.373: 99.6179% ( 2) 00:12:51.374 6.373 - 6.400: 99.6230% ( 1) 00:12:51.374 6.427 - 6.453: 99.6383% ( 3) 00:12:51.374 6.453 - 6.480: 99.6434% ( 1) 00:12:51.374 6.480 - 6.507: 99.6536% ( 2) 00:12:51.374 6.507 - 6.533: 99.6688% ( 3) 00:12:51.374 6.533 - 6.560: 99.6739% ( 1) 00:12:51.374 6.560 - 6.587: 99.6790% ( 1) 00:12:51.374 6.613 - 6.640: 99.6892% ( 2) 00:12:51.374 6.667 - 6.693: 99.6943% ( 1) 00:12:51.374 6.693 - 6.720: 99.7045% ( 2) 00:12:51.374 6.720 - 6.747: 99.7147% ( 2) 00:12:51.374 6.747 - 6.773: 99.7198% ( 1) 00:12:51.374 6.800 - 6.827: 99.7249% ( 1) 00:12:51.374 6.827 - 6.880: 99.7351% ( 2) 00:12:51.374 6.880 - 6.933: 99.7402% ( 1) 00:12:51.374 6.933 - 6.987: 99.7453% ( 1) 00:12:51.374 6.987 - 7.040: 99.7555% ( 2) 00:12:51.374 7.147 - 7.200: 99.7656% ( 2) 00:12:51.374 7.200 - 7.253: 99.7707% ( 1) 00:12:51.374 7.253 - 7.307: 99.7758% ( 1) 00:12:51.374 7.307 - 7.360: 99.7809% ( 1) 00:12:51.374 7.360 - 7.413: 99.7860% ( 1) 00:12:51.374 7.413 - 7.467: 99.7911% ( 1) 00:12:51.374 7.467 - 7.520: 99.8064% ( 3) 00:12:51.374 7.520 - 7.573: 99.8115% ( 1) 00:12:51.374 7.573 - 7.627: 99.8166% ( 1) 00:12:51.374 7.627 - 7.680: 99.8217% ( 1) 00:12:51.374 7.840 - 7.893: 99.8319% ( 2) 00:12:51.374 8.000 - 8.053: 99.8370% ( 1) 00:12:51.374 8.053 - 8.107: 99.8421% ( 1) 00:12:51.374 8.747 - 8.800: 99.8472% ( 1) 00:12:51.374 8.960 - 9.013: 99.8523% ( 1) 00:12:51.374 9.067 - 9.120: 99.8573% ( 1) 00:12:51.374 9.227 - 9.280: 99.8624% ( 1) 00:12:51.374 9.280 - 9.333: 99.8675% ( 1) 00:12:51.374 11.947 - 12.000: 99.8726% ( 1) 00:12:51.374 12.053 - 12.107: 99.8777% ( 1) 00:12:51.374 12.587 - 12.640: 99.8828% ( 1) 00:12:51.374 13.013 - 13.067: 99.8879% ( 1) 00:12:51.374 84.907 - 85.333: 99.8930% ( 1) 00:12:51.374 3986.773 - 4014.080: 100.0000% ( 21) 00:12:51.374 00:12:51.374 Complete histogram 00:12:51.374 ================== 00:12:51.374 Range in us Cumulative Count 00:12:51.374 2.373 - 
2.387: 0.0051% ( 1) 00:12:51.374 2.387 - 2.400: 0.4229% ( 82) 00:12:51.374 2.400 - 2.413: 1.1056% ( 134) 00:12:51.374 2.413 - [2024-05-15 10:58:47.932246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.374 2.427: 1.1871% ( 16) 00:12:51.374 2.427 - 2.440: 1.2686% ( 16) 00:12:51.374 2.440 - 2.453: 1.2839% ( 3) 00:12:51.374 2.453 - 2.467: 45.4606% ( 8671) 00:12:51.374 2.467 - 2.480: 64.5914% ( 3755) 00:12:51.374 2.480 - 2.493: 73.5786% ( 1764) 00:12:51.374 2.493 - 2.507: 79.6872% ( 1199) 00:12:51.374 2.507 - 2.520: 81.8779% ( 430) 00:12:51.374 2.520 - 2.533: 85.0265% ( 618) 00:12:51.374 2.533 - 2.547: 90.3200% ( 1039) 00:12:51.374 2.547 - 2.560: 95.3026% ( 978) 00:12:51.374 2.560 - 2.573: 97.5189% ( 435) 00:12:51.374 2.573 - 2.587: 98.5989% ( 212) 00:12:51.374 2.587 - 2.600: 99.1033% ( 99) 00:12:51.374 2.600 - 2.613: 99.2664% ( 32) 00:12:51.374 2.613 - 2.627: 99.3122% ( 9) 00:12:51.374 2.627 - 2.640: 99.3173% ( 1) 00:12:51.374 2.680 - 2.693: 99.3224% ( 1) 00:12:51.374 4.560 - 4.587: 99.3275% ( 1) 00:12:51.374 4.587 - 4.613: 99.3326% ( 1) 00:12:51.374 4.667 - 4.693: 99.3581% ( 5) 00:12:51.374 4.693 - 4.720: 99.3632% ( 1) 00:12:51.374 4.720 - 4.747: 99.3682% ( 1) 00:12:51.374 4.747 - 4.773: 99.3733% ( 1) 00:12:51.374 4.800 - 4.827: 99.3784% ( 1) 00:12:51.374 4.853 - 4.880: 99.3835% ( 1) 00:12:51.374 4.907 - 4.933: 99.3886% ( 1) 00:12:51.374 4.933 - 4.960: 99.3988% ( 2) 00:12:51.374 4.960 - 4.987: 99.4039% ( 1) 00:12:51.374 4.987 - 5.013: 99.4090% ( 1) 00:12:51.374 5.067 - 5.093: 99.4141% ( 1) 00:12:51.374 5.200 - 5.227: 99.4345% ( 4) 00:12:51.374 5.280 - 5.307: 99.4396% ( 1) 00:12:51.374 5.467 - 5.493: 99.4447% ( 1) 00:12:51.374 5.493 - 5.520: 99.4498% ( 1) 00:12:51.374 5.573 - 5.600: 99.4549% ( 1) 00:12:51.374 5.600 - 5.627: 99.4600% ( 1) 00:12:51.374 5.627 - 5.653: 99.4650% ( 1) 00:12:51.374 5.653 - 5.680: 99.4701% ( 1) 00:12:51.374 5.707 - 5.733: 99.4803% ( 2) 00:12:51.374 5.760 - 5.787: 99.4854% ( 1) 00:12:51.374 5.867 - 5.893: 99.4905% ( 1) 00:12:51.374 5.973 - 6.000: 99.4956% ( 1) 00:12:51.374 6.000 - 6.027: 99.5058% ( 2) 00:12:51.374 6.187 - 6.213: 99.5109% ( 1) 00:12:51.374 6.213 - 6.240: 99.5160% ( 1) 00:12:51.374 6.533 - 6.560: 99.5211% ( 1) 00:12:51.374 6.613 - 6.640: 99.5262% ( 1) 00:12:51.374 6.667 - 6.693: 99.5313% ( 1) 00:12:51.374 6.827 - 6.880: 99.5364% ( 1) 00:12:51.374 6.933 - 6.987: 99.5466% ( 2) 00:12:51.374 7.253 - 7.307: 99.5517% ( 1) 00:12:51.374 7.360 - 7.413: 99.5568% ( 1) 00:12:51.374 8.480 - 8.533: 99.5619% ( 1) 00:12:51.374 8.693 - 8.747: 99.5669% ( 1) 00:12:51.374 14.187 - 14.293: 99.5720% ( 1) 00:12:51.374 17.280 - 17.387: 99.5771% ( 1) 00:12:51.374 3986.773 - 4014.080: 99.9949% ( 82) 00:12:51.374 4969.813 - 4997.120: 100.0000% ( 1) 00:12:51.374 00:12:51.374 10:58:47 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:51.374 10:58:47 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:51.374 10:58:47 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:51.375 10:58:47 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:51.375 10:58:47 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.637 [ 00:12:51.637 { 00:12:51.637 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.637 "subtype": "Discovery", 00:12:51.637 "listen_addresses": [], 00:12:51.637 "allow_any_host": 
true, 00:12:51.637 "hosts": [] 00:12:51.637 }, 00:12:51.637 { 00:12:51.637 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.637 "subtype": "NVMe", 00:12:51.637 "listen_addresses": [ 00:12:51.637 { 00:12:51.637 "trtype": "VFIOUSER", 00:12:51.637 "adrfam": "IPv4", 00:12:51.637 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.637 "trsvcid": "0" 00:12:51.637 } 00:12:51.637 ], 00:12:51.637 "allow_any_host": true, 00:12:51.637 "hosts": [], 00:12:51.637 "serial_number": "SPDK1", 00:12:51.637 "model_number": "SPDK bdev Controller", 00:12:51.637 "max_namespaces": 32, 00:12:51.637 "min_cntlid": 1, 00:12:51.637 "max_cntlid": 65519, 00:12:51.637 "namespaces": [ 00:12:51.637 { 00:12:51.637 "nsid": 1, 00:12:51.637 "bdev_name": "Malloc1", 00:12:51.637 "name": "Malloc1", 00:12:51.637 "nguid": "4B83DFFDD27848D6B9FFB908545A33F4", 00:12:51.637 "uuid": "4b83dffd-d278-48d6-b9ff-b908545a33f4" 00:12:51.637 }, 00:12:51.637 { 00:12:51.637 "nsid": 2, 00:12:51.637 "bdev_name": "Malloc3", 00:12:51.637 "name": "Malloc3", 00:12:51.637 "nguid": "AB645956F6AB46E2A593A4D9B028848E", 00:12:51.637 "uuid": "ab645956-f6ab-46e2-a593-a4d9b028848e" 00:12:51.637 } 00:12:51.637 ] 00:12:51.637 }, 00:12:51.637 { 00:12:51.637 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.637 "subtype": "NVMe", 00:12:51.637 "listen_addresses": [ 00:12:51.637 { 00:12:51.637 "trtype": "VFIOUSER", 00:12:51.637 "adrfam": "IPv4", 00:12:51.637 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.637 "trsvcid": "0" 00:12:51.637 } 00:12:51.637 ], 00:12:51.637 "allow_any_host": true, 00:12:51.637 "hosts": [], 00:12:51.637 "serial_number": "SPDK2", 00:12:51.637 "model_number": "SPDK bdev Controller", 00:12:51.637 "max_namespaces": 32, 00:12:51.637 "min_cntlid": 1, 00:12:51.637 "max_cntlid": 65519, 00:12:51.637 "namespaces": [ 00:12:51.637 { 00:12:51.637 "nsid": 1, 00:12:51.637 "bdev_name": "Malloc2", 00:12:51.637 "name": "Malloc2", 00:12:51.637 "nguid": "A21D43CD93494CB496E371B050F3B0EB", 00:12:51.637 "uuid": "a21d43cd-9349-4cb4-96e3-71b050f3b0eb" 00:12:51.637 } 00:12:51.637 ] 00:12:51.637 } 00:12:51.637 ] 00:12:51.637 10:58:48 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:51.637 10:58:48 -- target/nvmf_vfio_user.sh@34 -- # aerpid=262947 00:12:51.637 10:58:48 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:51.637 10:58:48 -- common/autotest_common.sh@1261 -- # local i=0 00:12:51.637 10:58:48 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:51.637 10:58:48 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:51.637 10:58:48 -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:12:51.637 10:58:48 -- common/autotest_common.sh@1264 -- # i=1 00:12:51.637 10:58:48 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:12:51.637 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.637 10:58:48 -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:51.637 10:58:48 -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:12:51.637 10:58:48 -- common/autotest_common.sh@1264 -- # i=2 00:12:51.637 10:58:48 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:12:51.898 [2024-05-15 10:58:48.314472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.898 10:58:48 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:51.898 10:58:48 -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:51.898 10:58:48 -- common/autotest_common.sh@1272 -- # return 0 00:12:51.898 10:58:48 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:51.898 10:58:48 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:51.898 Malloc4 00:12:51.898 10:58:48 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:52.159 [2024-05-15 10:58:48.676820] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.159 10:58:48 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:52.159 Asynchronous Event Request test 00:12:52.159 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:52.159 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:52.159 Registering asynchronous event callbacks... 00:12:52.159 Starting namespace attribute notice tests for all controllers... 00:12:52.159 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:52.159 aer_cb - Changed Namespace 00:12:52.159 Cleaning up... 
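For reference, the namespace-change AER exercise logged above (launch the aer tool against cnode2, wait for its touch file, then hot-add a second namespace so the "Changed Namespace" callback fires) boils down to roughly the shell sequence below. This is only a sketch reconstructed from the commands visible in this log; the SPDK= shorthand and the explicit polling loop are conveniences introduced here and are not part of the test script itself.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # workspace path used throughout this run (shorthand introduced here)
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# Start the AER listener in the background; it is pointed at a touch file so the caller
# can tell when its asynchronous event requests are armed.
$SPDK/test/nvme/aer/aer -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
aerpid=$!

# Wait for the touch file (the script's waitforfile helper does the same, capped at 200 iterations).
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
rm -f /tmp/aer_touch_file

# Hot-add a second namespace; the target emits a Namespace Attribute Notice and the
# listener logs "aer_cb - Changed Namespace" before exiting.
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
wait $aerpid

The nvmf_get_subsystems output that follows shows the result: Malloc4 now appears as NSID 2 under nqn.2019-07.io.spdk:cnode2 alongside Malloc2.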
00:12:52.420 [ 00:12:52.420 { 00:12:52.420 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:52.420 "subtype": "Discovery", 00:12:52.420 "listen_addresses": [], 00:12:52.420 "allow_any_host": true, 00:12:52.420 "hosts": [] 00:12:52.420 }, 00:12:52.420 { 00:12:52.420 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:52.420 "subtype": "NVMe", 00:12:52.420 "listen_addresses": [ 00:12:52.420 { 00:12:52.420 "trtype": "VFIOUSER", 00:12:52.420 "adrfam": "IPv4", 00:12:52.420 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:52.420 "trsvcid": "0" 00:12:52.420 } 00:12:52.420 ], 00:12:52.420 "allow_any_host": true, 00:12:52.420 "hosts": [], 00:12:52.420 "serial_number": "SPDK1", 00:12:52.420 "model_number": "SPDK bdev Controller", 00:12:52.420 "max_namespaces": 32, 00:12:52.420 "min_cntlid": 1, 00:12:52.420 "max_cntlid": 65519, 00:12:52.420 "namespaces": [ 00:12:52.420 { 00:12:52.420 "nsid": 1, 00:12:52.420 "bdev_name": "Malloc1", 00:12:52.420 "name": "Malloc1", 00:12:52.420 "nguid": "4B83DFFDD27848D6B9FFB908545A33F4", 00:12:52.420 "uuid": "4b83dffd-d278-48d6-b9ff-b908545a33f4" 00:12:52.420 }, 00:12:52.420 { 00:12:52.420 "nsid": 2, 00:12:52.420 "bdev_name": "Malloc3", 00:12:52.420 "name": "Malloc3", 00:12:52.420 "nguid": "AB645956F6AB46E2A593A4D9B028848E", 00:12:52.420 "uuid": "ab645956-f6ab-46e2-a593-a4d9b028848e" 00:12:52.420 } 00:12:52.420 ] 00:12:52.420 }, 00:12:52.420 { 00:12:52.420 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:52.420 "subtype": "NVMe", 00:12:52.420 "listen_addresses": [ 00:12:52.420 { 00:12:52.420 "trtype": "VFIOUSER", 00:12:52.420 "adrfam": "IPv4", 00:12:52.420 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:52.420 "trsvcid": "0" 00:12:52.420 } 00:12:52.420 ], 00:12:52.420 "allow_any_host": true, 00:12:52.420 "hosts": [], 00:12:52.420 "serial_number": "SPDK2", 00:12:52.420 "model_number": "SPDK bdev Controller", 00:12:52.420 "max_namespaces": 32, 00:12:52.420 "min_cntlid": 1, 00:12:52.420 "max_cntlid": 65519, 00:12:52.420 "namespaces": [ 00:12:52.420 { 00:12:52.420 "nsid": 1, 00:12:52.420 "bdev_name": "Malloc2", 00:12:52.420 "name": "Malloc2", 00:12:52.420 "nguid": "A21D43CD93494CB496E371B050F3B0EB", 00:12:52.420 "uuid": "a21d43cd-9349-4cb4-96e3-71b050f3b0eb" 00:12:52.420 }, 00:12:52.420 { 00:12:52.420 "nsid": 2, 00:12:52.420 "bdev_name": "Malloc4", 00:12:52.420 "name": "Malloc4", 00:12:52.420 "nguid": "1A9F6078C13E4946963BD977F5DF1B7C", 00:12:52.420 "uuid": "1a9f6078-c13e-4946-963b-d977f5df1b7c" 00:12:52.420 } 00:12:52.420 ] 00:12:52.420 } 00:12:52.420 ] 00:12:52.420 10:58:48 -- target/nvmf_vfio_user.sh@44 -- # wait 262947 00:12:52.420 10:58:48 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:52.420 10:58:48 -- target/nvmf_vfio_user.sh@95 -- # killprocess 253762 00:12:52.420 10:58:48 -- common/autotest_common.sh@946 -- # '[' -z 253762 ']' 00:12:52.420 10:58:48 -- common/autotest_common.sh@950 -- # kill -0 253762 00:12:52.420 10:58:48 -- common/autotest_common.sh@951 -- # uname 00:12:52.420 10:58:48 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:52.420 10:58:48 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 253762 00:12:52.420 10:58:48 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:52.420 10:58:48 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:52.420 10:58:48 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 253762' 00:12:52.420 killing process with pid 253762 00:12:52.420 10:58:48 -- common/autotest_common.sh@965 -- # kill 253762 00:12:52.420 [2024-05-15 
10:58:48.926694] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:52.420 10:58:48 -- common/autotest_common.sh@970 -- # wait 253762 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=263194 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 263194' 00:12:52.682 Process pid: 263194 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:52.682 10:58:49 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 263194 00:12:52.682 10:58:49 -- common/autotest_common.sh@827 -- # '[' -z 263194 ']' 00:12:52.682 10:58:49 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.682 10:58:49 -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.682 10:58:49 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.682 10:58:49 -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.682 10:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:52.682 [2024-05-15 10:58:49.150175] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:52.682 [2024-05-15 10:58:49.151118] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:12:52.682 [2024-05-15 10:58:49.151157] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.682 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.682 [2024-05-15 10:58:49.211074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.682 [2024-05-15 10:58:49.274257] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.682 [2024-05-15 10:58:49.274295] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.682 [2024-05-15 10:58:49.274303] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.682 [2024-05-15 10:58:49.274309] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.682 [2024-05-15 10:58:49.274315] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
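The remaining lines show the target being brought back up in interrupt mode and the two vfio-user devices being re-created (reactor start-up, a VFIOUSER transport created with the extra -M -I arguments, then one subsystem per device). Condensed into a plain shell sequence, that bring-up looks roughly like this; it is a sketch assembled from the commands in the log, the SPDK= shorthand and the explicit loop are added here for readability, and the real script waits for the RPC socket with waitforlisten before issuing RPCs.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # same workspace path as above (shorthand introduced here)

# Start the target on cores 0-3 with interrupt mode enabled.
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
# (wait for /var/tmp/spdk.sock to come up before issuing RPCs)

# Create the VFIOUSER transport with the flags this test variant passes, then set up each device:
# a vfio-user directory, a malloc bdev, a subsystem, one namespace, and a vfio-user listener.
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done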
00:12:52.682 [2024-05-15 10:58:49.274464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.682 [2024-05-15 10:58:49.274579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.682 [2024-05-15 10:58:49.274670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.682 [2024-05-15 10:58:49.274671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.943 [2024-05-15 10:58:49.343694] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:52.943 [2024-05-15 10:58:49.343702] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:52.943 [2024-05-15 10:58:49.344016] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:52.943 [2024-05-15 10:58:49.344182] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:52.943 [2024-05-15 10:58:49.344281] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:53.517 10:58:49 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.517 10:58:49 -- common/autotest_common.sh@860 -- # return 0 00:12:53.517 10:58:49 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:54.462 10:58:50 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:54.462 10:58:51 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:54.462 10:58:51 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:54.462 10:58:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.462 10:58:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:54.462 10:58:51 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:54.722 Malloc1 00:12:54.722 10:58:51 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:54.980 10:58:51 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:54.980 10:58:51 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:55.240 [2024-05-15 10:58:51.743089] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:55.240 10:58:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:55.240 10:58:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:55.240 10:58:51 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:55.502 Malloc2 00:12:55.502 10:58:51 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:55.502 
10:58:52 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:55.763 10:58:52 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:56.024 10:58:52 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:56.024 10:58:52 -- target/nvmf_vfio_user.sh@95 -- # killprocess 263194 00:12:56.024 10:58:52 -- common/autotest_common.sh@946 -- # '[' -z 263194 ']' 00:12:56.024 10:58:52 -- common/autotest_common.sh@950 -- # kill -0 263194 00:12:56.024 10:58:52 -- common/autotest_common.sh@951 -- # uname 00:12:56.024 10:58:52 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:56.024 10:58:52 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 263194 00:12:56.024 10:58:52 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:56.024 10:58:52 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:56.024 10:58:52 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 263194' 00:12:56.024 killing process with pid 263194 00:12:56.024 10:58:52 -- common/autotest_common.sh@965 -- # kill 263194 00:12:56.024 [2024-05-15 10:58:52.488340] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:56.024 10:58:52 -- common/autotest_common.sh@970 -- # wait 263194 00:12:56.024 10:58:52 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:56.024 10:58:52 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:56.024 00:12:56.024 real 0m50.850s 00:12:56.024 user 3m21.568s 00:12:56.024 sys 0m3.020s 00:12:56.024 10:58:52 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:56.024 10:58:52 -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 ************************************ 00:12:56.024 END TEST nvmf_vfio_user 00:12:56.024 ************************************ 00:12:56.286 10:58:52 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:56.286 10:58:52 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:56.286 10:58:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.286 10:58:52 -- common/autotest_common.sh@10 -- # set +x 00:12:56.286 ************************************ 00:12:56.286 START TEST nvmf_vfio_user_nvme_compliance 00:12:56.286 ************************************ 00:12:56.286 10:58:52 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:56.286 * Looking for test storage... 
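For reference, the vfio-user target provisioning that the nvmf_vfio_user test above walks through reduces to the RPC sequence below. This is a condensed sketch of the rpc.py calls visible in the log (extra transport flags omitted); $rpc is shorthand for the full scripts/rpc.py path and is not a variable from the log itself.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i            # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0  # vfio-user socket directory as the listen address
done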
00:12:56.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:56.286 10:58:52 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.286 10:58:52 -- nvmf/common.sh@7 -- # uname -s 00:12:56.286 10:58:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.286 10:58:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.286 10:58:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.286 10:58:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.286 10:58:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.286 10:58:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.286 10:58:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.286 10:58:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.286 10:58:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.286 10:58:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.286 10:58:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:56.286 10:58:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:56.286 10:58:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.286 10:58:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.286 10:58:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.286 10:58:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.286 10:58:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.286 10:58:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.286 10:58:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.286 10:58:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.286 10:58:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.286 10:58:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.287 10:58:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.287 10:58:52 -- paths/export.sh@5 -- # export PATH 00:12:56.287 10:58:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.287 10:58:52 -- nvmf/common.sh@47 -- # : 0 00:12:56.287 10:58:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.287 10:58:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.287 10:58:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.287 10:58:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.287 10:58:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.287 10:58:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:56.287 10:58:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.287 10:58:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.287 10:58:52 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:56.287 10:58:52 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:56.287 10:58:52 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:56.287 10:58:52 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:56.287 10:58:52 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:56.287 10:58:52 -- compliance/compliance.sh@20 -- # nvmfpid=263937 00:12:56.287 10:58:52 -- compliance/compliance.sh@21 -- # echo 'Process pid: 263937' 00:12:56.287 Process pid: 263937 00:12:56.287 10:58:52 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:56.287 10:58:52 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:56.287 10:58:52 -- compliance/compliance.sh@24 -- # waitforlisten 263937 00:12:56.287 10:58:52 -- common/autotest_common.sh@827 -- # '[' -z 263937 ']' 00:12:56.287 10:58:52 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.287 10:58:52 -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.287 10:58:52 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.287 10:58:52 -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.287 10:58:52 -- common/autotest_common.sh@10 -- # set +x 00:12:56.287 [2024-05-15 10:58:52.916427] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:12:56.287 [2024-05-15 10:58:52.916502] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.549 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.549 [2024-05-15 10:58:52.981764] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.549 [2024-05-15 10:58:53.055434] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.549 [2024-05-15 10:58:53.055473] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.549 [2024-05-15 10:58:53.055480] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.549 [2024-05-15 10:58:53.055487] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.549 [2024-05-15 10:58:53.055493] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.549 [2024-05-15 10:58:53.055580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.549 [2024-05-15 10:58:53.055647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.549 [2024-05-15 10:58:53.055651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.123 10:58:53 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.123 10:58:53 -- common/autotest_common.sh@860 -- # return 0 00:12:57.123 10:58:53 -- compliance/compliance.sh@26 -- # sleep 1 00:12:58.066 10:58:54 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:58.066 10:58:54 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:58.066 10:58:54 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:58.066 10:58:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.066 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:58.066 10:58:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.066 10:58:54 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:58.066 10:58:54 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:58.066 10:58:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.066 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:58.326 malloc0 00:12:58.326 10:58:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.326 10:58:54 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:58.326 10:58:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.326 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:58.326 10:58:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.326 10:58:54 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:58.326 10:58:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.326 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:58.326 10:58:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.326 10:58:54 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:58.326 10:58:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.326 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:58.326 [2024-05-15 
10:58:54.754202] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:58.326 10:58:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.326 10:58:54 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:58.326 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.326 00:12:58.326 00:12:58.326 CUnit - A unit testing framework for C - Version 2.1-3 00:12:58.326 http://cunit.sourceforge.net/ 00:12:58.326 00:12:58.326 00:12:58.326 Suite: nvme_compliance 00:12:58.326 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 10:58:54.926061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.326 [2024-05-15 10:58:54.927370] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:58.327 [2024-05-15 10:58:54.927381] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:58.327 [2024-05-15 10:58:54.927385] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:58.327 [2024-05-15 10:58:54.929080] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.327 passed 00:12:58.588 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 10:58:55.023671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.588 [2024-05-15 10:58:55.026684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.588 passed 00:12:58.588 Test: admin_identify_ns ...[2024-05-15 10:58:55.123908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.588 [2024-05-15 10:58:55.184560] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:58.588 [2024-05-15 10:58:55.192559] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:58.588 [2024-05-15 10:58:55.213674] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.849 passed 00:12:58.850 Test: admin_get_features_mandatory_features ...[2024-05-15 10:58:55.306359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.850 [2024-05-15 10:58:55.309373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.850 passed 00:12:58.850 Test: admin_get_features_optional_features ...[2024-05-15 10:58:55.403919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.850 [2024-05-15 10:58:55.406935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.850 passed 00:12:58.850 Test: admin_set_features_number_of_queues ...[2024-05-15 10:58:55.498797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.111 [2024-05-15 10:58:55.603662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.111 passed 00:12:59.111 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 10:58:55.697682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.111 [2024-05-15 10:58:55.700701] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:12:59.111 passed 00:12:59.372 Test: admin_get_log_page_with_lpo ...[2024-05-15 10:58:55.793817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.372 [2024-05-15 10:58:55.865556] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:59.372 [2024-05-15 10:58:55.878628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.372 passed 00:12:59.372 Test: fabric_property_get ...[2024-05-15 10:58:55.970247] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.372 [2024-05-15 10:58:55.971475] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:59.372 [2024-05-15 10:58:55.973259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.372 passed 00:12:59.632 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 10:58:56.065798] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.632 [2024-05-15 10:58:56.067133] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:59.632 [2024-05-15 10:58:56.068892] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.632 passed 00:12:59.632 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 10:58:56.163025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.632 [2024-05-15 10:58:56.246555] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.632 [2024-05-15 10:58:56.262554] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.633 [2024-05-15 10:58:56.267637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.893 passed 00:12:59.893 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 10:58:56.361669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.893 [2024-05-15 10:58:56.362895] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:59.893 [2024-05-15 10:58:56.364694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.893 passed 00:12:59.893 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 10:58:56.456799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.893 [2024-05-15 10:58:56.533564] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:00.153 [2024-05-15 10:58:56.557553] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:00.153 [2024-05-15 10:58:56.562634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.153 passed 00:13:00.153 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 10:58:56.655267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.153 [2024-05-15 10:58:56.656494] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:00.153 [2024-05-15 10:58:56.656515] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:00.153 [2024-05-15 10:58:56.658297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.153 passed 00:13:00.153 Test: 
admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 10:58:56.749801] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.413 [2024-05-15 10:58:56.841555] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:00.414 [2024-05-15 10:58:56.849557] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:00.414 [2024-05-15 10:58:56.857555] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:00.414 [2024-05-15 10:58:56.865553] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:00.414 [2024-05-15 10:58:56.894644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.414 passed 00:13:00.414 Test: admin_create_io_sq_verify_pc ...[2024-05-15 10:58:56.988665] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.414 [2024-05-15 10:58:57.004561] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:00.414 [2024-05-15 10:58:57.024804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.414 passed 00:13:00.674 Test: admin_create_io_qp_max_qps ...[2024-05-15 10:58:57.116340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:01.616 [2024-05-15 10:58:58.216556] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:02.187 [2024-05-15 10:58:58.603658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:02.187 passed 00:13:02.187 Test: admin_create_io_sq_shared_cq ...[2024-05-15 10:58:58.695796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:02.188 [2024-05-15 10:58:58.827563] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:02.449 [2024-05-15 10:58:58.864614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:02.449 passed 00:13:02.449 00:13:02.449 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.449 suites 1 1 n/a 0 0 00:13:02.449 tests 18 18 18 0 0 00:13:02.449 asserts 360 360 360 0 n/a 00:13:02.449 00:13:02.449 Elapsed time = 1.650 seconds 00:13:02.449 10:58:58 -- compliance/compliance.sh@42 -- # killprocess 263937 00:13:02.449 10:58:58 -- common/autotest_common.sh@946 -- # '[' -z 263937 ']' 00:13:02.449 10:58:58 -- common/autotest_common.sh@950 -- # kill -0 263937 00:13:02.449 10:58:58 -- common/autotest_common.sh@951 -- # uname 00:13:02.449 10:58:58 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:02.449 10:58:58 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 263937 00:13:02.449 10:58:58 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:02.449 10:58:58 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:02.449 10:58:58 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 263937' 00:13:02.449 killing process with pid 263937 00:13:02.449 10:58:58 -- common/autotest_common.sh@965 -- # kill 263937 00:13:02.449 [2024-05-15 10:58:58.972555] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:02.449 10:58:58 -- common/autotest_common.sh@970 -- # wait 263937 00:13:02.709 
10:58:59 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:02.709 10:58:59 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:02.709 00:13:02.709 real 0m6.391s 00:13:02.709 user 0m18.257s 00:13:02.709 sys 0m0.470s 00:13:02.709 10:58:59 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.709 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:13:02.709 ************************************ 00:13:02.709 END TEST nvmf_vfio_user_nvme_compliance 00:13:02.709 ************************************ 00:13:02.709 10:58:59 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:02.709 10:58:59 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:02.709 10:58:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.709 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:13:02.709 ************************************ 00:13:02.709 START TEST nvmf_vfio_user_fuzz 00:13:02.709 ************************************ 00:13:02.710 10:58:59 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:02.710 * Looking for test storage... 00:13:02.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.710 10:58:59 -- nvmf/common.sh@7 -- # uname -s 00:13:02.710 10:58:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.710 10:58:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.710 10:58:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.710 10:58:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.710 10:58:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.710 10:58:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.710 10:58:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.710 10:58:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.710 10:58:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.710 10:58:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.710 10:58:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.710 10:58:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.710 10:58:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.710 10:58:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.710 10:58:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.710 10:58:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.710 10:58:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.710 10:58:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.710 10:58:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.710 10:58:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.710 10:58:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.710 10:58:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.710 10:58:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.710 10:58:59 -- paths/export.sh@5 -- # export PATH 00:13:02.710 10:58:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.710 10:58:59 -- nvmf/common.sh@47 -- # : 0 00:13:02.710 10:58:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.710 10:58:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.710 10:58:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.710 10:58:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.710 10:58:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.710 10:58:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.710 10:58:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.710 10:58:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:02.710 10:58:59 -- 
target/vfio_user_fuzz.sh@24 -- # nvmfpid=265337 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 265337' 00:13:02.710 Process pid: 265337 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:02.710 10:58:59 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 265337 00:13:02.710 10:58:59 -- common/autotest_common.sh@827 -- # '[' -z 265337 ']' 00:13:02.710 10:58:59 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.710 10:58:59 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.710 10:58:59 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.710 10:58:59 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.710 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:13:03.652 10:59:00 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.652 10:59:00 -- common/autotest_common.sh@860 -- # return 0 00:13:03.652 10:59:00 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:04.595 10:59:01 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.595 10:59:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.595 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:13:04.595 10:59:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.595 10:59:01 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:04.595 10:59:01 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.595 10:59:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.595 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:13:04.595 malloc0 00:13:04.595 10:59:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.595 10:59:01 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:04.595 10:59:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.595 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:13:04.595 10:59:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.595 10:59:01 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.595 10:59:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.595 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:13:04.595 10:59:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.595 10:59:01 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.595 10:59:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.595 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:13:04.595 10:59:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.595 10:59:01 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:04.595 10:59:01 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 
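The fuzz pass that follows is driven by SPDK's nvme_fuzz example app pointed at the vfio-user listener created just above. A condensed form of the invocation from the log, with $spdk standing in for the workspace spdk checkout (shorthand only, not part of the log):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a
# -m 0x2 pins the fuzzer to core 1; -F selects the target by transport ID string.
# -t 30 and -S 123456 are presumably the run time in seconds and the random seed,
# consistent with the ~30 s fuzz window and the seeded summary that follows.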
00:13:36.704 Fuzzing completed. Shutting down the fuzz application 00:13:36.704 00:13:36.704 Dumping successful admin opcodes: 00:13:36.704 8, 9, 10, 24, 00:13:36.704 Dumping successful io opcodes: 00:13:36.704 0, 00:13:36.704 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1091782, total successful commands: 4300, random_seed: 282589120 00:13:36.704 NS: 0x200003a1ef00 admin qp, Total commands completed: 137376, total successful commands: 1113, random_seed: 1000818112 00:13:36.704 10:59:32 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:36.704 10:59:32 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.704 10:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:36.704 10:59:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.704 10:59:32 -- target/vfio_user_fuzz.sh@46 -- # killprocess 265337 00:13:36.704 10:59:32 -- common/autotest_common.sh@946 -- # '[' -z 265337 ']' 00:13:36.704 10:59:32 -- common/autotest_common.sh@950 -- # kill -0 265337 00:13:36.704 10:59:32 -- common/autotest_common.sh@951 -- # uname 00:13:36.704 10:59:32 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:36.704 10:59:32 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 265337 00:13:36.704 10:59:32 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:36.704 10:59:32 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:36.704 10:59:32 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 265337' 00:13:36.704 killing process with pid 265337 00:13:36.704 10:59:32 -- common/autotest_common.sh@965 -- # kill 265337 00:13:36.704 10:59:32 -- common/autotest_common.sh@970 -- # wait 265337 00:13:36.704 10:59:32 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:36.704 10:59:32 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:36.704 00:13:36.704 real 0m33.654s 00:13:36.704 user 0m38.210s 00:13:36.704 sys 0m23.990s 00:13:36.704 10:59:32 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:36.704 10:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:36.704 ************************************ 00:13:36.704 END TEST nvmf_vfio_user_fuzz 00:13:36.704 ************************************ 00:13:36.704 10:59:32 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:36.704 10:59:32 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:36.704 10:59:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:36.704 10:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:36.704 ************************************ 00:13:36.704 START TEST nvmf_host_management 00:13:36.704 ************************************ 00:13:36.704 10:59:32 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:36.704 * Looking for test storage... 
00:13:36.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.704 10:59:33 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.704 10:59:33 -- nvmf/common.sh@7 -- # uname -s 00:13:36.704 10:59:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.704 10:59:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.704 10:59:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.704 10:59:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.704 10:59:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.704 10:59:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.704 10:59:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.704 10:59:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.704 10:59:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.704 10:59:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.704 10:59:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.704 10:59:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.704 10:59:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.704 10:59:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.704 10:59:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.704 10:59:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.704 10:59:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.704 10:59:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.704 10:59:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.704 10:59:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.705 10:59:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.705 10:59:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.705 10:59:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.705 10:59:33 -- paths/export.sh@5 -- # export PATH 00:13:36.705 10:59:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.705 10:59:33 -- nvmf/common.sh@47 -- # : 0 00:13:36.705 10:59:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.705 10:59:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.705 10:59:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.705 10:59:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.705 10:59:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.705 10:59:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.705 10:59:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.705 10:59:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.705 10:59:33 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.705 10:59:33 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.705 10:59:33 -- target/host_management.sh@105 -- # nvmftestinit 00:13:36.705 10:59:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:36.705 10:59:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.705 10:59:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:36.705 10:59:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:36.705 10:59:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:36.705 10:59:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.705 10:59:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.705 10:59:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.705 10:59:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:36.705 10:59:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:36.705 10:59:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.705 10:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:43.285 10:59:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:43.285 10:59:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.285 10:59:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.285 10:59:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.285 10:59:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.285 10:59:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.285 10:59:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.285 10:59:39 -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.285 10:59:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.285 
10:59:39 -- nvmf/common.sh@296 -- # e810=() 00:13:43.285 10:59:39 -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.285 10:59:39 -- nvmf/common.sh@297 -- # x722=() 00:13:43.285 10:59:39 -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.285 10:59:39 -- nvmf/common.sh@298 -- # mlx=() 00:13:43.285 10:59:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.285 10:59:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.285 10:59:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.285 10:59:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.285 10:59:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.285 10:59:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.285 10:59:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:43.285 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:43.285 10:59:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.285 10:59:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:43.285 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:43.285 10:59:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.285 10:59:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.285 10:59:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.285 10:59:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.285 10:59:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:43.285 10:59:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.285 10:59:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:13:43.285 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:43.285 10:59:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.285 10:59:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.285 10:59:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.285 10:59:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:43.285 10:59:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.285 10:59:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:43.286 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:43.286 10:59:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.286 10:59:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:43.286 10:59:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:43.286 10:59:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:43.286 10:59:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:43.286 10:59:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:43.286 10:59:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.286 10:59:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.286 10:59:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.286 10:59:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.286 10:59:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.286 10:59:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.286 10:59:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.286 10:59:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.286 10:59:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.286 10:59:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.286 10:59:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.286 10:59:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.286 10:59:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.546 10:59:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.546 10:59:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.546 10:59:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.546 10:59:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.546 10:59:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.546 10:59:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.546 10:59:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:13:43.546 00:13:43.546 --- 10.0.0.2 ping statistics --- 00:13:43.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.546 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:13:43.546 10:59:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:13:43.546 00:13:43.546 --- 10.0.0.1 ping statistics --- 00:13:43.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.546 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:13:43.546 10:59:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.546 10:59:40 -- nvmf/common.sh@411 -- # return 0 00:13:43.546 10:59:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:43.546 10:59:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.546 10:59:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:43.546 10:59:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:43.546 10:59:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.546 10:59:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:43.546 10:59:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:43.546 10:59:40 -- target/host_management.sh@107 -- # nvmf_host_management 00:13:43.546 10:59:40 -- target/host_management.sh@69 -- # starttarget 00:13:43.546 10:59:40 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:43.546 10:59:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:43.546 10:59:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:43.546 10:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:43.546 10:59:40 -- nvmf/common.sh@470 -- # nvmfpid=275642 00:13:43.546 10:59:40 -- nvmf/common.sh@471 -- # waitforlisten 275642 00:13:43.546 10:59:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:43.546 10:59:40 -- common/autotest_common.sh@827 -- # '[' -z 275642 ']' 00:13:43.546 10:59:40 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.546 10:59:40 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:43.546 10:59:40 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.546 10:59:40 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:43.547 10:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:43.547 [2024-05-15 10:59:40.191419] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:13:43.547 [2024-05-15 10:59:40.191477] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.807 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.807 [2024-05-15 10:59:40.274922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.807 [2024-05-15 10:59:40.370106] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.807 [2024-05-15 10:59:40.370162] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.807 [2024-05-15 10:59:40.370171] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.807 [2024-05-15 10:59:40.370178] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.807 [2024-05-15 10:59:40.370184] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
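The phy-mode network plumbing a few entries above boils down to moving one e810 port into a private network namespace for the target while its peer port stays in the host namespace for the initiator. Condensed from the ip/iptables commands in the log (sudo and the preliminary address flushes omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side (host namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # admit NVMe/TCP on port 4420
ping -c 1 10.0.0.2                                                     # host -> namespaced target port

The nvmf_tgt for this test is then launched under ip netns exec cvl_0_0_ns_spdk, so its listener at 10.0.0.2:4420 is reached from the host side over cvl_0_1.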
00:13:43.807 [2024-05-15 10:59:40.370311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.807 [2024-05-15 10:59:40.370478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.807 [2024-05-15 10:59:40.370641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.807 [2024-05-15 10:59:40.370641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:44.378 10:59:40 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:44.378 10:59:40 -- common/autotest_common.sh@860 -- # return 0 00:13:44.378 10:59:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:44.378 10:59:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.378 10:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:44.378 10:59:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.378 10:59:41 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.378 10:59:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.378 10:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.378 [2024-05-15 10:59:41.018029] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.378 10:59:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.378 10:59:41 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:44.378 10:59:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:44.378 10:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.638 10:59:41 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:44.638 10:59:41 -- target/host_management.sh@23 -- # cat 00:13:44.638 10:59:41 -- target/host_management.sh@30 -- # rpc_cmd 00:13:44.638 10:59:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.638 10:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.638 Malloc0 00:13:44.638 [2024-05-15 10:59:41.077212] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:44.638 [2024-05-15 10:59:41.077455] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.638 10:59:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.638 10:59:41 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:44.638 10:59:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.638 10:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.638 10:59:41 -- target/host_management.sh@73 -- # perfpid=275803 00:13:44.638 10:59:41 -- target/host_management.sh@74 -- # waitforlisten 275803 /var/tmp/bdevperf.sock 00:13:44.638 10:59:41 -- common/autotest_common.sh@827 -- # '[' -z 275803 ']' 00:13:44.638 10:59:41 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.638 10:59:41 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:44.639 10:59:41 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:44.639 10:59:41 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:44.639 10:59:41 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:44.639 10:59:41 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:44.639 10:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.639 10:59:41 -- nvmf/common.sh@521 -- # config=() 00:13:44.639 10:59:41 -- nvmf/common.sh@521 -- # local subsystem config 00:13:44.639 10:59:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:44.639 10:59:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:44.639 { 00:13:44.639 "params": { 00:13:44.639 "name": "Nvme$subsystem", 00:13:44.639 "trtype": "$TEST_TRANSPORT", 00:13:44.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:44.639 "adrfam": "ipv4", 00:13:44.639 "trsvcid": "$NVMF_PORT", 00:13:44.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:44.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:44.639 "hdgst": ${hdgst:-false}, 00:13:44.639 "ddgst": ${ddgst:-false} 00:13:44.639 }, 00:13:44.639 "method": "bdev_nvme_attach_controller" 00:13:44.639 } 00:13:44.639 EOF 00:13:44.639 )") 00:13:44.639 10:59:41 -- nvmf/common.sh@543 -- # cat 00:13:44.639 10:59:41 -- nvmf/common.sh@545 -- # jq . 00:13:44.639 10:59:41 -- nvmf/common.sh@546 -- # IFS=, 00:13:44.639 10:59:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:44.639 "params": { 00:13:44.639 "name": "Nvme0", 00:13:44.639 "trtype": "tcp", 00:13:44.639 "traddr": "10.0.0.2", 00:13:44.639 "adrfam": "ipv4", 00:13:44.639 "trsvcid": "4420", 00:13:44.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:44.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:44.639 "hdgst": false, 00:13:44.639 "ddgst": false 00:13:44.639 }, 00:13:44.639 "method": "bdev_nvme_attach_controller" 00:13:44.639 }' 00:13:44.639 [2024-05-15 10:59:41.175260] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:13:44.639 [2024-05-15 10:59:41.175308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275803 ] 00:13:44.639 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.639 [2024-05-15 10:59:41.233428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.899 [2024-05-15 10:59:41.297804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.159 Running I/O for 10 seconds... 
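bdevperf reads its NVMe-oF attach configuration over --json /dev/fd/63, fed by gen_nvmf_target_json; the resolved params block is printed just above. Written out to a file (name chosen here for illustration; the surrounding "subsystems"/"bdev" wrapper is the usual shape of an SPDK JSON config and is assumed, not shown in the log), the same attach parameters could drive an equivalent run:

    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10

As in the traced invocation, -q 64 keeps up to 64 I/Os outstanding, -o 65536 uses 64 KiB I/Os, -w verify reads back and checks what was written, and -t 10 runs for 10 seconds.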
00:13:45.423 10:59:41 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:45.423 10:59:41 -- common/autotest_common.sh@860 -- # return 0 00:13:45.423 10:59:41 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:45.423 10:59:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.423 10:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:45.423 10:59:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.423 10:59:41 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.423 10:59:41 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:45.423 10:59:41 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:45.423 10:59:41 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:45.423 10:59:41 -- target/host_management.sh@52 -- # local ret=1 00:13:45.423 10:59:41 -- target/host_management.sh@53 -- # local i 00:13:45.423 10:59:41 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:45.423 10:59:41 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:45.423 10:59:41 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:45.423 10:59:41 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:45.423 10:59:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.423 10:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:45.423 10:59:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.423 10:59:42 -- target/host_management.sh@55 -- # read_io_count=515 00:13:45.423 10:59:42 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:13:45.423 10:59:42 -- target/host_management.sh@59 -- # ret=0 00:13:45.423 10:59:42 -- target/host_management.sh@60 -- # break 00:13:45.423 10:59:42 -- target/host_management.sh@64 -- # return 0 00:13:45.423 10:59:42 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:45.423 10:59:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.423 10:59:42 -- common/autotest_common.sh@10 -- # set +x 00:13:45.423 [2024-05-15 10:59:42.020517] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020564] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020572] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020584] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020591] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020597] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020603] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020610] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the 
state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020617] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020623] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020630] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020636] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020642] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020649] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020655] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020661] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020668] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020675] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020682] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020688] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020695] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020701] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020707] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020714] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020720] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020726] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020733] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020740] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020746] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020753] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020760] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020767] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020773] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020780] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020786] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020792] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020799] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020805] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020811] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020818] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020825] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020831] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020838] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020844] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020850] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020856] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020863] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020869] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020876] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020882] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020888] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 
10:59:42.020895] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020901] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020907] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020914] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020920] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020926] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020933] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.423 [2024-05-15 10:59:42.020941] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.424 [2024-05-15 10:59:42.020947] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.424 [2024-05-15 10:59:42.020953] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.424 [2024-05-15 10:59:42.020961] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.424 [2024-05-15 10:59:42.020967] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810bc0 is same with the state(5) to be set 00:13:45.424 [2024-05-15 10:59:42.021177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 
[2024-05-15 10:59:42.021296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021472] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.424 [2024-05-15 10:59:42.021863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.424 [2024-05-15 10:59:42.021870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.021880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.021888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.021898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.021905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.021915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.021923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.021933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.021940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.021950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.021958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.021967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.021975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.021985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.021993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.425 [2024-05-15 10:59:42.022328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.022337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fc850 is same with the state(5) to be set 00:13:45.425 [2024-05-15 10:59:42.022380] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x13fc850 was disconnected and freed. reset controller. 00:13:45.425 [2024-05-15 10:59:42.023602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:45.425 task offset: 73728 on job bdev=Nvme0n1 fails 00:13:45.425 00:13:45.425 Latency(us) 00:13:45.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.425 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:45.425 Job: Nvme0n1 ended in about 0.46 seconds with error 00:13:45.425 Verification LBA range: start 0x0 length 0x400 00:13:45.425 Nvme0n1 : 0.46 1257.92 78.62 139.77 0.00 44536.34 5352.11 36918.61 00:13:45.425 =================================================================================================================== 00:13:45.425 Total : 1257.92 78.62 139.77 0.00 44536.34 5352.11 36918.61 00:13:45.425 10:59:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.425 [2024-05-15 10:59:42.025610] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:45.425 [2024-05-15 10:59:42.025636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc3840 (9): Bad file descriptor 00:13:45.425 10:59:42 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:45.425 10:59:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.425 10:59:42 -- common/autotest_common.sh@10 -- # set +x 00:13:45.425 [2024-05-15 10:59:42.030123] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:45.425 [2024-05-15 10:59:42.030201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:45.425 [2024-05-15 10:59:42.030219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.425 [2024-05-15 10:59:42.030232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:45.425 [2024-05-15 10:59:42.030240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:45.425 [2024-05-15 10:59:42.030247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:45.425 [2024-05-15 10:59:42.030254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfc3840 00:13:45.425 [2024-05-15 10:59:42.030275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc3840 (9): Bad file descriptor 00:13:45.425 [2024-05-15 10:59:42.030286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:45.425 [2024-05-15 10:59:42.030293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:45.425 [2024-05-15 10:59:42.030301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:45.425 [2024-05-15 10:59:42.030313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
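The burst of errors above is the expected outcome of this host-management step: while bdevperf is running I/O, the host NQN is removed from the subsystem's allowed list, the target tears down the queue pair (the in-flight READs complete as ABORTED - SQ DELETION), and the initiator's automatic reconnect is then rejected at the Fabrics CONNECT stage ("does not allow host", sct 1 / sc 132). The target-side trigger and the subsequent restore are the two RPCs traced here:

    # cut the initiator off while it is running I/O
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # reconnect attempts from that hostnqn now fail with "does not allow host"
    # restore access so a follow-up run can succeed
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0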
00:13:45.425 10:59:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.425 10:59:42 -- target/host_management.sh@87 -- # sleep 1 00:13:46.809 10:59:43 -- target/host_management.sh@91 -- # kill -9 275803 00:13:46.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (275803) - No such process 00:13:46.809 10:59:43 -- target/host_management.sh@91 -- # true 00:13:46.809 10:59:43 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:46.809 10:59:43 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:46.809 10:59:43 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:46.809 10:59:43 -- nvmf/common.sh@521 -- # config=() 00:13:46.809 10:59:43 -- nvmf/common.sh@521 -- # local subsystem config 00:13:46.809 10:59:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:46.809 10:59:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:46.809 { 00:13:46.809 "params": { 00:13:46.809 "name": "Nvme$subsystem", 00:13:46.809 "trtype": "$TEST_TRANSPORT", 00:13:46.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.809 "adrfam": "ipv4", 00:13:46.809 "trsvcid": "$NVMF_PORT", 00:13:46.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.809 "hdgst": ${hdgst:-false}, 00:13:46.809 "ddgst": ${ddgst:-false} 00:13:46.809 }, 00:13:46.809 "method": "bdev_nvme_attach_controller" 00:13:46.809 } 00:13:46.809 EOF 00:13:46.809 )") 00:13:46.809 10:59:43 -- nvmf/common.sh@543 -- # cat 00:13:46.809 10:59:43 -- nvmf/common.sh@545 -- # jq . 00:13:46.809 10:59:43 -- nvmf/common.sh@546 -- # IFS=, 00:13:46.809 10:59:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:46.809 "params": { 00:13:46.809 "name": "Nvme0", 00:13:46.809 "trtype": "tcp", 00:13:46.809 "traddr": "10.0.0.2", 00:13:46.809 "adrfam": "ipv4", 00:13:46.809 "trsvcid": "4420", 00:13:46.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:46.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:46.809 "hdgst": false, 00:13:46.809 "ddgst": false 00:13:46.809 }, 00:13:46.809 "method": "bdev_nvme_attach_controller" 00:13:46.809 }' 00:13:46.809 [2024-05-15 10:59:43.103652] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:13:46.809 [2024-05-15 10:59:43.103743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276249 ] 00:13:46.809 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.809 [2024-05-15 10:59:43.163268] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.809 [2024-05-15 10:59:43.226363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.069 Running I/O for 1 seconds... 
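With access restored, the 1-second verify run above is expected to finish without failures. The earlier waitforio step (traced before the host was removed) decided that I/O was actually flowing by polling bdevperf's own RPC socket for the read counter; a standalone sketch of that loop, with the socket path, bdev name, jq filter and 100-op threshold taken from the trace and the polling interval assumed:

    while :; do
        ops=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break     # the trace saw 515 completed reads at this point
        sleep 1                         # interval assumed
    done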
00:13:48.010 00:13:48.010 Latency(us) 00:13:48.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.010 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:48.010 Verification LBA range: start 0x0 length 0x400 00:13:48.010 Nvme0n1 : 1.02 1438.19 89.89 0.00 0.00 43774.22 8628.91 34515.63 00:13:48.010 =================================================================================================================== 00:13:48.010 Total : 1438.19 89.89 0.00 0.00 43774.22 8628.91 34515.63 00:13:48.271 10:59:44 -- target/host_management.sh@102 -- # stoptarget 00:13:48.271 10:59:44 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:48.271 10:59:44 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:48.272 10:59:44 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:48.272 10:59:44 -- target/host_management.sh@40 -- # nvmftestfini 00:13:48.272 10:59:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:48.272 10:59:44 -- nvmf/common.sh@117 -- # sync 00:13:48.272 10:59:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.272 10:59:44 -- nvmf/common.sh@120 -- # set +e 00:13:48.272 10:59:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.272 10:59:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.272 rmmod nvme_tcp 00:13:48.272 rmmod nvme_fabrics 00:13:48.272 rmmod nvme_keyring 00:13:48.272 10:59:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.272 10:59:44 -- nvmf/common.sh@124 -- # set -e 00:13:48.272 10:59:44 -- nvmf/common.sh@125 -- # return 0 00:13:48.272 10:59:44 -- nvmf/common.sh@478 -- # '[' -n 275642 ']' 00:13:48.272 10:59:44 -- nvmf/common.sh@479 -- # killprocess 275642 00:13:48.272 10:59:44 -- common/autotest_common.sh@946 -- # '[' -z 275642 ']' 00:13:48.272 10:59:44 -- common/autotest_common.sh@950 -- # kill -0 275642 00:13:48.272 10:59:44 -- common/autotest_common.sh@951 -- # uname 00:13:48.272 10:59:44 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:48.272 10:59:44 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 275642 00:13:48.272 10:59:44 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:48.272 10:59:44 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:48.272 10:59:44 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 275642' 00:13:48.272 killing process with pid 275642 00:13:48.272 10:59:44 -- common/autotest_common.sh@965 -- # kill 275642 00:13:48.272 [2024-05-15 10:59:44.831341] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:48.272 10:59:44 -- common/autotest_common.sh@970 -- # wait 275642 00:13:48.533 [2024-05-15 10:59:44.935820] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:48.533 10:59:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:48.533 10:59:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:48.533 10:59:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:48.533 10:59:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.533 10:59:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.533 10:59:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.533 10:59:44 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.534 10:59:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.450 10:59:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:50.450 10:59:47 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:50.450 00:13:50.450 real 0m14.099s 00:13:50.450 user 0m22.983s 00:13:50.450 sys 0m6.201s 00:13:50.450 10:59:47 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:50.450 10:59:47 -- common/autotest_common.sh@10 -- # set +x 00:13:50.450 ************************************ 00:13:50.450 END TEST nvmf_host_management 00:13:50.450 ************************************ 00:13:50.450 10:59:47 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.450 10:59:47 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:50.450 10:59:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:50.450 10:59:47 -- common/autotest_common.sh@10 -- # set +x 00:13:50.712 ************************************ 00:13:50.712 START TEST nvmf_lvol 00:13:50.712 ************************************ 00:13:50.712 10:59:47 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.712 * Looking for test storage... 00:13:50.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.712 10:59:47 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.712 10:59:47 -- nvmf/common.sh@7 -- # uname -s 00:13:50.712 10:59:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.712 10:59:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.712 10:59:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.712 10:59:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.712 10:59:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.712 10:59:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.712 10:59:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.712 10:59:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.712 10:59:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.712 10:59:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.712 10:59:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:50.712 10:59:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:50.712 10:59:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.712 10:59:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.712 10:59:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.712 10:59:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.712 10:59:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.712 10:59:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.712 10:59:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.712 10:59:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.712 10:59:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.712 10:59:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.712 10:59:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.712 10:59:47 -- paths/export.sh@5 -- # export PATH 00:13:50.712 10:59:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.712 10:59:47 -- nvmf/common.sh@47 -- # : 0 00:13:50.712 10:59:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.712 10:59:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.712 10:59:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.712 10:59:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.712 10:59:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.712 10:59:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.712 10:59:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.712 10:59:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.712 10:59:47 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.712 10:59:47 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.712 10:59:47 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:50.712 10:59:47 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:50.712 10:59:47 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.713 10:59:47 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:50.713 10:59:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:50.713 10:59:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:50.713 10:59:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:50.713 10:59:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:50.713 10:59:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:50.713 10:59:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.713 10:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.713 10:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.713 10:59:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:50.713 10:59:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:50.713 10:59:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.713 10:59:47 -- common/autotest_common.sh@10 -- # set +x 00:13:57.307 10:59:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:57.307 10:59:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:57.307 10:59:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:57.307 10:59:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:57.307 10:59:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:57.307 10:59:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:57.307 10:59:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:57.307 10:59:53 -- nvmf/common.sh@295 -- # net_devs=() 00:13:57.307 10:59:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:57.307 10:59:53 -- nvmf/common.sh@296 -- # e810=() 00:13:57.307 10:59:53 -- nvmf/common.sh@296 -- # local -ga e810 00:13:57.307 10:59:53 -- nvmf/common.sh@297 -- # x722=() 00:13:57.307 10:59:53 -- nvmf/common.sh@297 -- # local -ga x722 00:13:57.307 10:59:53 -- nvmf/common.sh@298 -- # mlx=() 00:13:57.307 10:59:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:57.307 10:59:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.307 10:59:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:57.307 10:59:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:57.307 10:59:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:57.307 10:59:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.307 10:59:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:57.307 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:57.307 10:59:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.307 10:59:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:57.307 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:57.307 10:59:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:57.307 10:59:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.307 10:59:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.307 10:59:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:57.307 10:59:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.307 10:59:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:57.307 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:57.307 10:59:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.307 10:59:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.307 10:59:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.307 10:59:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:57.307 10:59:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.307 10:59:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:57.307 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:57.307 10:59:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.307 10:59:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:57.307 10:59:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:57.307 10:59:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:57.307 10:59:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:57.307 10:59:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.307 10:59:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.307 10:59:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.307 10:59:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:57.307 10:59:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.307 10:59:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.307 10:59:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:57.307 10:59:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.307 10:59:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.307 10:59:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:57.307 10:59:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:57.307 10:59:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.307 10:59:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.568 10:59:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:13:57.568 10:59:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.568 10:59:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.568 10:59:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.569 10:59:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.569 10:59:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.569 10:59:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:13:57.569 00:13:57.569 --- 10.0.0.2 ping statistics --- 00:13:57.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.569 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:13:57.569 10:59:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:13:57.569 00:13:57.569 --- 10.0.0.1 ping statistics --- 00:13:57.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.569 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:13:57.569 10:59:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.569 10:59:54 -- nvmf/common.sh@411 -- # return 0 00:13:57.569 10:59:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:57.569 10:59:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.569 10:59:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:57.569 10:59:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:57.569 10:59:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.569 10:59:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:57.569 10:59:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:57.830 10:59:54 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:57.830 10:59:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:57.830 10:59:54 -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:57.830 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:57.830 10:59:54 -- nvmf/common.sh@470 -- # nvmfpid=280696 00:13:57.830 10:59:54 -- nvmf/common.sh@471 -- # waitforlisten 280696 00:13:57.830 10:59:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:57.830 10:59:54 -- common/autotest_common.sh@827 -- # '[' -z 280696 ']' 00:13:57.830 10:59:54 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.830 10:59:54 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:57.830 10:59:54 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.830 10:59:54 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:57.830 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:57.830 [2024-05-15 10:59:54.310821] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
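The trace above assembles the TCP loopback fabric for the functional tests: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1 for the initiator, both directions are verified with ping, and the kernel nvme-tcp module is loaded before nvmf_tgt is started. A condensed sketch of that setup, using only commands that appear in the trace (interface names and addresses as in this run):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address
  modprobe nvme-tcp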
00:13:57.830 [2024-05-15 10:59:54.310869] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.830 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.830 [2024-05-15 10:59:54.375082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:57.830 [2024-05-15 10:59:54.438856] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.830 [2024-05-15 10:59:54.438891] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.830 [2024-05-15 10:59:54.438899] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.830 [2024-05-15 10:59:54.438906] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.830 [2024-05-15 10:59:54.438911] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.830 [2024-05-15 10:59:54.439050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.830 [2024-05-15 10:59:54.439164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.830 [2024-05-15 10:59:54.439167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.774 10:59:55 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:58.774 10:59:55 -- common/autotest_common.sh@860 -- # return 0 00:13:58.774 10:59:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:58.774 10:59:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.774 10:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:58.774 10:59:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.774 10:59:55 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:58.774 [2024-05-15 10:59:55.259376] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.774 10:59:55 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.035 10:59:55 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:59.035 10:59:55 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.035 10:59:55 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:59.035 10:59:55 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:59.296 10:59:55 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:59.557 10:59:56 -- target/nvmf_lvol.sh@29 -- # lvs=3a280e3d-d9d2-43d4-b0b9-3e68c6669e6c 00:13:59.557 10:59:56 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3a280e3d-d9d2-43d4-b0b9-3e68c6669e6c lvol 20 00:13:59.557 10:59:56 -- target/nvmf_lvol.sh@32 -- # lvol=b0253cad-d39f-4683-8683-f0e683f25500 00:13:59.557 10:59:56 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:59.817 10:59:56 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b0253cad-d39f-4683-8683-f0e683f25500 00:14:00.078 10:59:56 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:00.078 [2024-05-15 10:59:56.645592] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:00.078 [2024-05-15 10:59:56.645816] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.078 10:59:56 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:00.339 10:59:56 -- target/nvmf_lvol.sh@42 -- # perf_pid=281261 00:14:00.339 10:59:56 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:00.339 10:59:56 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:00.339 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.283 10:59:57 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b0253cad-d39f-4683-8683-f0e683f25500 MY_SNAPSHOT 00:14:01.542 10:59:58 -- target/nvmf_lvol.sh@47 -- # snapshot=c860d2fe-107d-4632-b2dd-440510c952d0 00:14:01.542 10:59:58 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b0253cad-d39f-4683-8683-f0e683f25500 30 00:14:01.802 10:59:58 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c860d2fe-107d-4632-b2dd-440510c952d0 MY_CLONE 00:14:01.802 10:59:58 -- target/nvmf_lvol.sh@49 -- # clone=9552e26d-4a1d-4a7a-b47b-569f24576276 00:14:01.802 10:59:58 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9552e26d-4a1d-4a7a-b47b-569f24576276 00:14:02.373 10:59:58 -- target/nvmf_lvol.sh@53 -- # wait 281261 00:14:12.374 Initializing NVMe Controllers 00:14:12.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:12.374 Controller IO queue size 128, less than required. 00:14:12.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:12.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:12.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:12.374 Initialization complete. Launching workers. 
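The nvmf_lvol test case drives everything through rpc.py against that target: two malloc bdevs are combined into a raid0 bdev, an lvstore and a logical volume are created on it, the lvol is exported as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and snapshot, resize, clone and inflate are issued while spdk_nvme_perf keeps random writes running against the volume. A condensed sketch of that flow, with the long workspace paths shortened and the UUIDs printed above replaced by placeholders:

  rpc.py bdev_malloc_create 64 512                     # Malloc0
  rpc.py bdev_malloc_create 64 512                     # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs            # prints <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20        # prints <lvol-uuid>
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT    # taken while the perf job is writing
  rpc.py bdev_lvol_resize <lvol-uuid> 30
  rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  rpc.py bdev_lvol_inflate <clone-uuid>
  wait                                                 # perf results are reported below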
00:14:12.374 ======================================================== 00:14:12.374 Latency(us) 00:14:12.374 Device Information : IOPS MiB/s Average min max 00:14:12.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12922.30 50.48 9909.48 1625.66 58011.57 00:14:12.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17812.80 69.58 7185.70 746.00 53032.71 00:14:12.374 ======================================================== 00:14:12.375 Total : 30735.10 120.06 8330.89 746.00 58011.57 00:14:12.375 00:14:12.375 11:00:07 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:12.375 11:00:07 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b0253cad-d39f-4683-8683-f0e683f25500 00:14:12.375 11:00:07 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a280e3d-d9d2-43d4-b0b9-3e68c6669e6c 00:14:12.375 11:00:07 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:12.375 11:00:07 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:12.375 11:00:07 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:12.375 11:00:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:12.375 11:00:07 -- nvmf/common.sh@117 -- # sync 00:14:12.375 11:00:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.375 11:00:07 -- nvmf/common.sh@120 -- # set +e 00:14:12.375 11:00:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.375 11:00:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.375 rmmod nvme_tcp 00:14:12.375 rmmod nvme_fabrics 00:14:12.375 rmmod nvme_keyring 00:14:12.375 11:00:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.375 11:00:07 -- nvmf/common.sh@124 -- # set -e 00:14:12.375 11:00:07 -- nvmf/common.sh@125 -- # return 0 00:14:12.375 11:00:07 -- nvmf/common.sh@478 -- # '[' -n 280696 ']' 00:14:12.375 11:00:07 -- nvmf/common.sh@479 -- # killprocess 280696 00:14:12.375 11:00:07 -- common/autotest_common.sh@946 -- # '[' -z 280696 ']' 00:14:12.375 11:00:07 -- common/autotest_common.sh@950 -- # kill -0 280696 00:14:12.375 11:00:07 -- common/autotest_common.sh@951 -- # uname 00:14:12.375 11:00:07 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:12.375 11:00:07 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 280696 00:14:12.375 11:00:07 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:12.375 11:00:07 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:12.375 11:00:07 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 280696' 00:14:12.375 killing process with pid 280696 00:14:12.375 11:00:07 -- common/autotest_common.sh@965 -- # kill 280696 00:14:12.375 [2024-05-15 11:00:07.842388] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:12.375 11:00:07 -- common/autotest_common.sh@970 -- # wait 280696 00:14:12.375 11:00:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:12.375 11:00:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:12.375 11:00:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:12.375 11:00:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.375 11:00:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.375 11:00:07 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.375 11:00:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.375 11:00:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.759 11:00:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.759 00:14:13.759 real 0m22.961s 00:14:13.759 user 1m3.935s 00:14:13.759 sys 0m7.440s 00:14:13.759 11:00:10 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.759 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:14:13.759 ************************************ 00:14:13.759 END TEST nvmf_lvol 00:14:13.759 ************************************ 00:14:13.759 11:00:10 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:13.759 11:00:10 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:13.759 11:00:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.759 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:14:13.759 ************************************ 00:14:13.759 START TEST nvmf_lvs_grow 00:14:13.759 ************************************ 00:14:13.759 11:00:10 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:13.759 * Looking for test storage... 00:14:13.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.759 11:00:10 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.759 11:00:10 -- nvmf/common.sh@7 -- # uname -s 00:14:13.759 11:00:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.759 11:00:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.759 11:00:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.759 11:00:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.759 11:00:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.759 11:00:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.759 11:00:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.759 11:00:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.759 11:00:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.759 11:00:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.759 11:00:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:13.759 11:00:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:13.759 11:00:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.759 11:00:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.759 11:00:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.759 11:00:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.759 11:00:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.759 11:00:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.759 11:00:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.759 11:00:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.759 11:00:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.759 11:00:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.759 11:00:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.759 11:00:10 -- paths/export.sh@5 -- # export PATH 00:14:13.759 11:00:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.759 11:00:10 -- nvmf/common.sh@47 -- # : 0 00:14:13.759 11:00:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.759 11:00:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.759 11:00:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.759 11:00:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.759 11:00:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.759 11:00:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.759 11:00:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.759 11:00:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.759 11:00:10 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.759 11:00:10 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:13.759 11:00:10 -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:13.759 11:00:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:13.759 11:00:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.759 11:00:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:13.759 11:00:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:13.759 11:00:10 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:14:13.759 11:00:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.759 11:00:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.759 11:00:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.759 11:00:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:13.759 11:00:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:13.759 11:00:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.759 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.354 11:00:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:20.354 11:00:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.354 11:00:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.354 11:00:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.354 11:00:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.354 11:00:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.354 11:00:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.354 11:00:16 -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.354 11:00:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.354 11:00:16 -- nvmf/common.sh@296 -- # e810=() 00:14:20.354 11:00:16 -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.354 11:00:16 -- nvmf/common.sh@297 -- # x722=() 00:14:20.354 11:00:16 -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.354 11:00:16 -- nvmf/common.sh@298 -- # mlx=() 00:14:20.354 11:00:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.354 11:00:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.354 11:00:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.354 11:00:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.354 11:00:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.354 11:00:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.354 11:00:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:20.354 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:20.354 11:00:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.354 
11:00:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.354 11:00:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:20.354 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:20.354 11:00:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.354 11:00:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.354 11:00:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.354 11:00:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.354 11:00:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:20.355 11:00:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.355 11:00:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:20.355 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:20.355 11:00:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.355 11:00:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.355 11:00:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.355 11:00:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:20.355 11:00:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.355 11:00:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:20.355 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:20.355 11:00:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.355 11:00:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:20.355 11:00:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:20.355 11:00:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:20.355 11:00:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:20.355 11:00:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:20.355 11:00:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.355 11:00:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.355 11:00:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.355 11:00:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.355 11:00:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.355 11:00:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.355 11:00:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.355 11:00:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.355 11:00:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.355 11:00:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.355 11:00:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.355 11:00:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.355 11:00:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.617 11:00:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.617 11:00:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.617 11:00:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.617 
11:00:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.617 11:00:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.617 11:00:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.617 11:00:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:14:20.617 00:14:20.617 --- 10.0.0.2 ping statistics --- 00:14:20.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.617 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:14:20.617 11:00:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:14:20.617 00:14:20.617 --- 10.0.0.1 ping statistics --- 00:14:20.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.617 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:14:20.617 11:00:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.617 11:00:17 -- nvmf/common.sh@411 -- # return 0 00:14:20.617 11:00:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:20.617 11:00:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.617 11:00:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:20.617 11:00:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:20.617 11:00:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.617 11:00:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:20.617 11:00:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:20.617 11:00:17 -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:20.617 11:00:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:20.617 11:00:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:20.617 11:00:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.617 11:00:17 -- nvmf/common.sh@470 -- # nvmfpid=288134 00:14:20.617 11:00:17 -- nvmf/common.sh@471 -- # waitforlisten 288134 00:14:20.617 11:00:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:20.617 11:00:17 -- common/autotest_common.sh@827 -- # '[' -z 288134 ']' 00:14:20.617 11:00:17 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.617 11:00:17 -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:20.617 11:00:17 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.617 11:00:17 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:20.617 11:00:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.878 [2024-05-15 11:00:17.283459] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
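As in the nvmf_lvol run, nvmfappstart launches the target inside the namespace and waits for its RPC socket before the transport is created; for nvmf_lvs_grow the core mask is 0x1 rather than 0x7. A minimal sketch of that start-up, with the workspace path shortened:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # waitforlisten: block until the app serves /var/tmp/spdk.sock, then configure it
  rpc.py nvmf_create_transport -t tcp -o -u 8192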
00:14:20.878 [2024-05-15 11:00:17.283525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.878 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.878 [2024-05-15 11:00:17.359956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.878 [2024-05-15 11:00:17.433505] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.878 [2024-05-15 11:00:17.433555] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.878 [2024-05-15 11:00:17.433562] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.878 [2024-05-15 11:00:17.433569] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.878 [2024-05-15 11:00:17.433575] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.878 [2024-05-15 11:00:17.433595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.448 11:00:18 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:21.448 11:00:18 -- common/autotest_common.sh@860 -- # return 0 00:14:21.448 11:00:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:21.448 11:00:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.448 11:00:18 -- common/autotest_common.sh@10 -- # set +x 00:14:21.708 11:00:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:21.708 [2024-05-15 11:00:18.245084] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:21.708 11:00:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:21.708 11:00:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.708 11:00:18 -- common/autotest_common.sh@10 -- # set +x 00:14:21.708 ************************************ 00:14:21.708 START TEST lvs_grow_clean 00:14:21.708 ************************************ 00:14:21.708 11:00:18 -- common/autotest_common.sh@1121 -- # lvs_grow 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:21.708 11:00:18 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:21.969 11:00:18 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:21.969 11:00:18 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:22.230 11:00:18 -- target/nvmf_lvs_grow.sh@28 -- # lvs=0bf00060-ee34-4bda-b258-aa038a973f21 00:14:22.230 11:00:18 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:22.230 11:00:18 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:22.230 11:00:18 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:22.230 11:00:18 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:22.230 11:00:18 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0bf00060-ee34-4bda-b258-aa038a973f21 lvol 150 00:14:22.491 11:00:18 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f8495dea-4c87-4018-bee8-c711c2900212 00:14:22.491 11:00:18 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:22.491 11:00:18 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:22.491 [2024-05-15 11:00:19.128052] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:22.491 [2024-05-15 11:00:19.128103] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:22.491 true 00:14:22.751 11:00:19 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:22.751 11:00:19 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:22.751 11:00:19 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:22.751 11:00:19 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:23.012 11:00:19 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f8495dea-4c87-4018-bee8-c711c2900212 00:14:23.012 11:00:19 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:23.273 [2024-05-15 11:00:19.765798] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:23.273 [2024-05-15 11:00:19.766017] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.273 11:00:19 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:23.533 11:00:19 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=288697 00:14:23.533 11:00:19 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:23.534 11:00:19 -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:23.534 11:00:19 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 288697 /var/tmp/bdevperf.sock 00:14:23.534 11:00:19 -- common/autotest_common.sh@827 -- # '[' -z 288697 ']' 00:14:23.534 11:00:19 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.534 11:00:19 -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.534 11:00:19 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.534 11:00:19 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.534 11:00:19 -- common/autotest_common.sh@10 -- # set +x 00:14:23.534 [2024-05-15 11:00:20.010867] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:14:23.534 [2024-05-15 11:00:20.010922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288697 ] 00:14:23.534 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.534 [2024-05-15 11:00:20.093160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.534 [2024-05-15 11:00:20.157006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.475 11:00:20 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:24.475 11:00:20 -- common/autotest_common.sh@860 -- # return 0 00:14:24.475 11:00:20 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:24.475 Nvme0n1 00:14:24.475 11:00:21 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:24.736 [ 00:14:24.736 { 00:14:24.736 "name": "Nvme0n1", 00:14:24.736 "aliases": [ 00:14:24.736 "f8495dea-4c87-4018-bee8-c711c2900212" 00:14:24.736 ], 00:14:24.736 "product_name": "NVMe disk", 00:14:24.736 "block_size": 4096, 00:14:24.736 "num_blocks": 38912, 00:14:24.736 "uuid": "f8495dea-4c87-4018-bee8-c711c2900212", 00:14:24.736 "assigned_rate_limits": { 00:14:24.736 "rw_ios_per_sec": 0, 00:14:24.736 "rw_mbytes_per_sec": 0, 00:14:24.736 "r_mbytes_per_sec": 0, 00:14:24.736 "w_mbytes_per_sec": 0 00:14:24.736 }, 00:14:24.736 "claimed": false, 00:14:24.736 "zoned": false, 00:14:24.736 "supported_io_types": { 00:14:24.736 "read": true, 00:14:24.736 "write": true, 00:14:24.736 "unmap": true, 00:14:24.736 "write_zeroes": true, 00:14:24.736 "flush": true, 00:14:24.736 "reset": true, 00:14:24.736 "compare": true, 00:14:24.736 "compare_and_write": true, 00:14:24.736 "abort": true, 00:14:24.736 "nvme_admin": true, 00:14:24.736 "nvme_io": true 00:14:24.736 }, 00:14:24.736 "memory_domains": [ 00:14:24.736 { 00:14:24.736 "dma_device_id": "system", 00:14:24.736 "dma_device_type": 1 00:14:24.736 } 00:14:24.736 ], 00:14:24.736 "driver_specific": { 00:14:24.736 "nvme": [ 00:14:24.736 { 00:14:24.736 "trid": { 00:14:24.736 "trtype": "TCP", 00:14:24.736 "adrfam": "IPv4", 00:14:24.736 "traddr": "10.0.0.2", 00:14:24.736 "trsvcid": "4420", 00:14:24.736 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:14:24.736 }, 00:14:24.736 "ctrlr_data": { 00:14:24.736 "cntlid": 1, 00:14:24.736 "vendor_id": "0x8086", 00:14:24.736 "model_number": "SPDK bdev Controller", 00:14:24.736 "serial_number": "SPDK0", 00:14:24.736 "firmware_revision": "24.05", 00:14:24.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:24.736 "oacs": { 00:14:24.736 "security": 0, 00:14:24.736 "format": 0, 00:14:24.736 "firmware": 0, 00:14:24.736 "ns_manage": 0 00:14:24.736 }, 00:14:24.736 "multi_ctrlr": true, 00:14:24.736 "ana_reporting": false 00:14:24.736 }, 00:14:24.736 "vs": { 00:14:24.736 "nvme_version": "1.3" 00:14:24.736 }, 00:14:24.736 "ns_data": { 00:14:24.736 "id": 1, 00:14:24.736 "can_share": true 00:14:24.736 } 00:14:24.736 } 00:14:24.736 ], 00:14:24.736 "mp_policy": "active_passive" 00:14:24.736 } 00:14:24.736 } 00:14:24.736 ] 00:14:24.736 11:00:21 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.736 11:00:21 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=289030 00:14:24.736 11:00:21 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:24.736 Running I/O for 10 seconds... 00:14:25.677 Latency(us) 00:14:25.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.677 Nvme0n1 : 1.00 18746.00 73.23 0.00 0.00 0.00 0.00 0.00 00:14:25.677 =================================================================================================================== 00:14:25.677 Total : 18746.00 73.23 0.00 0.00 0.00 0.00 0.00 00:14:25.677 00:14:26.620 11:00:23 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:26.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.620 Nvme0n1 : 2.00 18843.00 73.61 0.00 0.00 0.00 0.00 0.00 00:14:26.620 =================================================================================================================== 00:14:26.620 Total : 18843.00 73.61 0.00 0.00 0.00 0.00 0.00 00:14:26.620 00:14:26.880 true 00:14:26.880 11:00:23 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:26.880 11:00:23 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:26.880 11:00:23 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:26.880 11:00:23 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:26.880 11:00:23 -- target/nvmf_lvs_grow.sh@65 -- # wait 289030 00:14:27.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.824 Nvme0n1 : 3.00 18877.67 73.74 0.00 0.00 0.00 0.00 0.00 00:14:27.824 =================================================================================================================== 00:14:27.824 Total : 18877.67 73.74 0.00 0.00 0.00 0.00 0.00 00:14:27.824 00:14:28.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.765 Nvme0n1 : 4.00 18905.50 73.85 0.00 0.00 0.00 0.00 0.00 00:14:28.765 =================================================================================================================== 00:14:28.765 Total : 18905.50 73.85 0.00 0.00 0.00 0.00 0.00 00:14:28.765 00:14:29.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:14:29.708 Nvme0n1 : 5.00 18923.60 73.92 0.00 0.00 0.00 0.00 0.00 00:14:29.708 =================================================================================================================== 00:14:29.708 Total : 18923.60 73.92 0.00 0.00 0.00 0.00 0.00 00:14:29.708 00:14:30.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.650 Nvme0n1 : 6.00 18936.83 73.97 0.00 0.00 0.00 0.00 0.00 00:14:30.650 =================================================================================================================== 00:14:30.650 Total : 18936.83 73.97 0.00 0.00 0.00 0.00 0.00 00:14:30.650 00:14:32.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.036 Nvme0n1 : 7.00 18954.71 74.04 0.00 0.00 0.00 0.00 0.00 00:14:32.036 =================================================================================================================== 00:14:32.036 Total : 18954.71 74.04 0.00 0.00 0.00 0.00 0.00 00:14:32.036 00:14:32.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.977 Nvme0n1 : 8.00 18968.38 74.10 0.00 0.00 0.00 0.00 0.00 00:14:32.977 =================================================================================================================== 00:14:32.977 Total : 18968.38 74.10 0.00 0.00 0.00 0.00 0.00 00:14:32.977 00:14:33.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.919 Nvme0n1 : 9.00 18971.67 74.11 0.00 0.00 0.00 0.00 0.00 00:14:33.919 =================================================================================================================== 00:14:33.919 Total : 18971.67 74.11 0.00 0.00 0.00 0.00 0.00 00:14:33.919 00:14:34.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.861 Nvme0n1 : 10.00 18980.50 74.14 0.00 0.00 0.00 0.00 0.00 00:14:34.861 =================================================================================================================== 00:14:34.861 Total : 18980.50 74.14 0.00 0.00 0.00 0.00 0.00 00:14:34.861 00:14:34.861 00:14:34.861 Latency(us) 00:14:34.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.861 Nvme0n1 : 10.01 18982.19 74.15 0.00 0.00 6738.98 3072.00 11905.71 00:14:34.861 =================================================================================================================== 00:14:34.861 Total : 18982.19 74.15 0.00 0.00 6738.98 3072.00 11905.71 00:14:34.861 0 00:14:34.861 11:00:31 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 288697 00:14:34.861 11:00:31 -- common/autotest_common.sh@946 -- # '[' -z 288697 ']' 00:14:34.861 11:00:31 -- common/autotest_common.sh@950 -- # kill -0 288697 00:14:34.861 11:00:31 -- common/autotest_common.sh@951 -- # uname 00:14:34.861 11:00:31 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:34.861 11:00:31 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 288697 00:14:34.861 11:00:31 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:34.861 11:00:31 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:34.861 11:00:31 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 288697' 00:14:34.861 killing process with pid 288697 00:14:34.861 11:00:31 -- common/autotest_common.sh@965 -- # kill 288697 00:14:34.861 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.861 00:14:34.861 
Latency(us) 00:14:34.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.861 =================================================================================================================== 00:14:34.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.861 11:00:31 -- common/autotest_common.sh@970 -- # wait 288697 00:14:34.861 11:00:31 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.122 11:00:31 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:35.383 11:00:31 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:35.383 11:00:31 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:35.383 11:00:31 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:35.383 11:00:31 -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:35.383 11:00:31 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:35.644 [2024-05-15 11:00:32.075938] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:35.644 11:00:32 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:35.644 11:00:32 -- common/autotest_common.sh@648 -- # local es=0 00:14:35.644 11:00:32 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:35.644 11:00:32 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.644 11:00:32 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.644 11:00:32 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.644 11:00:32 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.644 11:00:32 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.644 11:00:32 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.644 11:00:32 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.644 11:00:32 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:35.644 11:00:32 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:35.644 request: 00:14:35.644 { 00:14:35.644 "uuid": "0bf00060-ee34-4bda-b258-aa038a973f21", 00:14:35.644 "method": "bdev_lvol_get_lvstores", 00:14:35.644 "req_id": 1 00:14:35.644 } 00:14:35.644 Got JSON-RPC error response 00:14:35.644 response: 00:14:35.644 { 00:14:35.644 "code": -19, 00:14:35.644 "message": "No such device" 00:14:35.644 } 00:14:35.644 11:00:32 -- common/autotest_common.sh@651 -- # es=1 00:14:35.644 11:00:32 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:35.644 11:00:32 -- common/autotest_common.sh@670 -- # 
[[ -n '' ]] 00:14:35.644 11:00:32 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:35.644 11:00:32 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.904 aio_bdev 00:14:35.904 11:00:32 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f8495dea-4c87-4018-bee8-c711c2900212 00:14:35.904 11:00:32 -- common/autotest_common.sh@895 -- # local bdev_name=f8495dea-4c87-4018-bee8-c711c2900212 00:14:35.904 11:00:32 -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:35.904 11:00:32 -- common/autotest_common.sh@897 -- # local i 00:14:35.904 11:00:32 -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:35.904 11:00:32 -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:35.904 11:00:32 -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:36.170 11:00:32 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f8495dea-4c87-4018-bee8-c711c2900212 -t 2000 00:14:36.170 [ 00:14:36.170 { 00:14:36.170 "name": "f8495dea-4c87-4018-bee8-c711c2900212", 00:14:36.170 "aliases": [ 00:14:36.170 "lvs/lvol" 00:14:36.170 ], 00:14:36.170 "product_name": "Logical Volume", 00:14:36.170 "block_size": 4096, 00:14:36.170 "num_blocks": 38912, 00:14:36.170 "uuid": "f8495dea-4c87-4018-bee8-c711c2900212", 00:14:36.170 "assigned_rate_limits": { 00:14:36.170 "rw_ios_per_sec": 0, 00:14:36.170 "rw_mbytes_per_sec": 0, 00:14:36.170 "r_mbytes_per_sec": 0, 00:14:36.170 "w_mbytes_per_sec": 0 00:14:36.170 }, 00:14:36.170 "claimed": false, 00:14:36.170 "zoned": false, 00:14:36.170 "supported_io_types": { 00:14:36.170 "read": true, 00:14:36.170 "write": true, 00:14:36.170 "unmap": true, 00:14:36.170 "write_zeroes": true, 00:14:36.170 "flush": false, 00:14:36.170 "reset": true, 00:14:36.170 "compare": false, 00:14:36.170 "compare_and_write": false, 00:14:36.170 "abort": false, 00:14:36.170 "nvme_admin": false, 00:14:36.170 "nvme_io": false 00:14:36.170 }, 00:14:36.170 "driver_specific": { 00:14:36.171 "lvol": { 00:14:36.171 "lvol_store_uuid": "0bf00060-ee34-4bda-b258-aa038a973f21", 00:14:36.171 "base_bdev": "aio_bdev", 00:14:36.171 "thin_provision": false, 00:14:36.171 "snapshot": false, 00:14:36.171 "clone": false, 00:14:36.171 "esnap_clone": false 00:14:36.171 } 00:14:36.171 } 00:14:36.171 } 00:14:36.171 ] 00:14:36.171 11:00:32 -- common/autotest_common.sh@903 -- # return 0 00:14:36.171 11:00:32 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:36.171 11:00:32 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:36.434 11:00:32 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:36.434 11:00:32 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:36.434 11:00:32 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:36.434 11:00:33 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:36.434 11:00:33 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f8495dea-4c87-4018-bee8-c711c2900212 00:14:36.693 11:00:33 -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0bf00060-ee34-4bda-b258-aa038a973f21 00:14:36.693 11:00:33 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:36.952 00:14:36.952 real 0m15.208s 00:14:36.952 user 0m14.967s 00:14:36.952 sys 0m1.183s 00:14:36.952 11:00:33 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:36.952 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:14:36.952 ************************************ 00:14:36.952 END TEST lvs_grow_clean 00:14:36.952 ************************************ 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:36.952 11:00:33 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:36.952 11:00:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:36.952 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:14:36.952 ************************************ 00:14:36.952 START TEST lvs_grow_dirty 00:14:36.952 ************************************ 00:14:36.952 11:00:33 -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:36.952 11:00:33 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.211 11:00:33 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.211 11:00:33 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:37.211 11:00:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:37.211 11:00:33 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:37.473 11:00:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1849eb97-97c8-4d13-8824-bb9da421d741 00:14:37.473 11:00:33 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:37.473 11:00:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:37.473 11:00:34 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:37.473 11:00:34 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:37.473 11:00:34 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1849eb97-97c8-4d13-8824-bb9da421d741 lvol 150 00:14:37.734 11:00:34 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7fa77795-60c0-4a31-a0de-94a861b6503a 00:14:37.734 11:00:34 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.734 11:00:34 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:37.734 [2024-05-15 11:00:34.385914] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:37.734 [2024-05-15 11:00:34.385965] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:37.994 true 00:14:37.994 11:00:34 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:37.994 11:00:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:37.994 11:00:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:37.994 11:00:34 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:38.255 11:00:34 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7fa77795-60c0-4a31-a0de-94a861b6503a 00:14:38.255 11:00:34 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:38.515 [2024-05-15 11:00:34.995769] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.515 11:00:35 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:38.515 11:00:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=291768 00:14:38.515 11:00:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:38.515 11:00:35 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:38.515 11:00:35 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 291768 /var/tmp/bdevperf.sock 00:14:38.515 11:00:35 -- common/autotest_common.sh@827 -- # '[' -z 291768 ']' 00:14:38.515 11:00:35 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.515 11:00:35 -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:38.515 11:00:35 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.515 11:00:35 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:38.515 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:14:38.775 [2024-05-15 11:00:35.193152] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
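Condensed, the export-and-exercise phase traced here boils down to the following rpc.py sequence (a sketch, not the script verbatim: rpc.py stands for the scripts/rpc.py path in the workspace, and $lvol_uuid / $lvs_uuid stand for the UUIDs printed in the log; the NQN, listener address and bdevperf flags are copied from the trace):

# target side: one subsystem, the lvol as its namespace, a TCP listener on 10.0.0.2:4420
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf attaches over the same link and runs 10 s of 4 KiB randwrite
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
# a couple of seconds into the run, grow the lvstore onto the enlarged 400M AIO file (49 -> 99 clusters)
rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"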
00:14:38.775 [2024-05-15 11:00:35.193201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291768 ] 00:14:38.775 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.775 [2024-05-15 11:00:35.266591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.775 [2024-05-15 11:00:35.319852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.356 11:00:35 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:39.356 11:00:35 -- common/autotest_common.sh@860 -- # return 0 00:14:39.356 11:00:35 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:39.616 Nvme0n1 00:14:39.616 11:00:36 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:39.877 [ 00:14:39.877 { 00:14:39.877 "name": "Nvme0n1", 00:14:39.877 "aliases": [ 00:14:39.877 "7fa77795-60c0-4a31-a0de-94a861b6503a" 00:14:39.877 ], 00:14:39.877 "product_name": "NVMe disk", 00:14:39.877 "block_size": 4096, 00:14:39.877 "num_blocks": 38912, 00:14:39.877 "uuid": "7fa77795-60c0-4a31-a0de-94a861b6503a", 00:14:39.877 "assigned_rate_limits": { 00:14:39.877 "rw_ios_per_sec": 0, 00:14:39.877 "rw_mbytes_per_sec": 0, 00:14:39.877 "r_mbytes_per_sec": 0, 00:14:39.877 "w_mbytes_per_sec": 0 00:14:39.877 }, 00:14:39.877 "claimed": false, 00:14:39.877 "zoned": false, 00:14:39.877 "supported_io_types": { 00:14:39.877 "read": true, 00:14:39.877 "write": true, 00:14:39.877 "unmap": true, 00:14:39.877 "write_zeroes": true, 00:14:39.877 "flush": true, 00:14:39.877 "reset": true, 00:14:39.877 "compare": true, 00:14:39.877 "compare_and_write": true, 00:14:39.877 "abort": true, 00:14:39.877 "nvme_admin": true, 00:14:39.877 "nvme_io": true 00:14:39.877 }, 00:14:39.877 "memory_domains": [ 00:14:39.877 { 00:14:39.877 "dma_device_id": "system", 00:14:39.877 "dma_device_type": 1 00:14:39.877 } 00:14:39.877 ], 00:14:39.877 "driver_specific": { 00:14:39.877 "nvme": [ 00:14:39.877 { 00:14:39.877 "trid": { 00:14:39.877 "trtype": "TCP", 00:14:39.877 "adrfam": "IPv4", 00:14:39.877 "traddr": "10.0.0.2", 00:14:39.877 "trsvcid": "4420", 00:14:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:39.877 }, 00:14:39.877 "ctrlr_data": { 00:14:39.877 "cntlid": 1, 00:14:39.877 "vendor_id": "0x8086", 00:14:39.877 "model_number": "SPDK bdev Controller", 00:14:39.877 "serial_number": "SPDK0", 00:14:39.878 "firmware_revision": "24.05", 00:14:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:39.878 "oacs": { 00:14:39.878 "security": 0, 00:14:39.878 "format": 0, 00:14:39.878 "firmware": 0, 00:14:39.878 "ns_manage": 0 00:14:39.878 }, 00:14:39.878 "multi_ctrlr": true, 00:14:39.878 "ana_reporting": false 00:14:39.878 }, 00:14:39.878 "vs": { 00:14:39.878 "nvme_version": "1.3" 00:14:39.878 }, 00:14:39.878 "ns_data": { 00:14:39.878 "id": 1, 00:14:39.878 "can_share": true 00:14:39.878 } 00:14:39.878 } 00:14:39.878 ], 00:14:39.878 "mp_policy": "active_passive" 00:14:39.878 } 00:14:39.878 } 00:14:39.878 ] 00:14:39.878 11:00:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=292076 00:14:39.878 11:00:36 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:39.878 11:00:36 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:39.878 Running I/O for 10 seconds... 00:14:40.819 Latency(us) 00:14:40.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.819 Nvme0n1 : 1.00 18620.00 72.73 0.00 0.00 0.00 0.00 0.00 00:14:40.819 =================================================================================================================== 00:14:40.819 Total : 18620.00 72.73 0.00 0.00 0.00 0.00 0.00 00:14:40.819 00:14:41.762 11:00:38 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:41.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.762 Nvme0n1 : 2.00 18770.50 73.32 0.00 0.00 0.00 0.00 0.00 00:14:41.762 =================================================================================================================== 00:14:41.762 Total : 18770.50 73.32 0.00 0.00 0.00 0.00 0.00 00:14:41.762 00:14:42.033 true 00:14:42.034 11:00:38 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:42.034 11:00:38 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:42.034 11:00:38 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:42.034 11:00:38 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:42.034 11:00:38 -- target/nvmf_lvs_grow.sh@65 -- # wait 292076 00:14:42.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.977 Nvme0n1 : 3.00 18819.00 73.51 0.00 0.00 0.00 0.00 0.00 00:14:42.977 =================================================================================================================== 00:14:42.978 Total : 18819.00 73.51 0.00 0.00 0.00 0.00 0.00 00:14:42.978 00:14:43.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.927 Nvme0n1 : 4.00 18876.00 73.73 0.00 0.00 0.00 0.00 0.00 00:14:43.927 =================================================================================================================== 00:14:43.927 Total : 18876.00 73.73 0.00 0.00 0.00 0.00 0.00 00:14:43.927 00:14:44.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.868 Nvme0n1 : 5.00 18898.40 73.82 0.00 0.00 0.00 0.00 0.00 00:14:44.868 =================================================================================================================== 00:14:44.868 Total : 18898.40 73.82 0.00 0.00 0.00 0.00 0.00 00:14:44.868 00:14:45.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.810 Nvme0n1 : 6.00 18925.17 73.93 0.00 0.00 0.00 0.00 0.00 00:14:45.810 =================================================================================================================== 00:14:45.810 Total : 18925.17 73.93 0.00 0.00 0.00 0.00 0.00 00:14:45.810 00:14:46.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.751 Nvme0n1 : 7.00 18951.14 74.03 0.00 0.00 0.00 0.00 0.00 00:14:46.751 =================================================================================================================== 00:14:46.751 Total : 18951.14 74.03 0.00 0.00 0.00 0.00 0.00 00:14:46.751 00:14:48.135 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:14:48.135 Nvme0n1 : 8.00 18955.38 74.04 0.00 0.00 0.00 0.00 0.00 00:14:48.135 =================================================================================================================== 00:14:48.135 Total : 18955.38 74.04 0.00 0.00 0.00 0.00 0.00 00:14:48.135 00:14:49.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.074 Nvme0n1 : 9.00 18970.89 74.11 0.00 0.00 0.00 0.00 0.00 00:14:49.074 =================================================================================================================== 00:14:49.074 Total : 18970.89 74.11 0.00 0.00 0.00 0.00 0.00 00:14:49.074 00:14:50.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.016 Nvme0n1 : 10.00 18984.60 74.16 0.00 0.00 0.00 0.00 0.00 00:14:50.016 =================================================================================================================== 00:14:50.016 Total : 18984.60 74.16 0.00 0.00 0.00 0.00 0.00 00:14:50.016 00:14:50.016 00:14:50.016 Latency(us) 00:14:50.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.016 Nvme0n1 : 10.01 18987.30 74.17 0.00 0.00 6738.37 4068.69 15947.09 00:14:50.016 =================================================================================================================== 00:14:50.016 Total : 18987.30 74.17 0.00 0.00 6738.37 4068.69 15947.09 00:14:50.016 0 00:14:50.016 11:00:46 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 291768 00:14:50.016 11:00:46 -- common/autotest_common.sh@946 -- # '[' -z 291768 ']' 00:14:50.016 11:00:46 -- common/autotest_common.sh@950 -- # kill -0 291768 00:14:50.016 11:00:46 -- common/autotest_common.sh@951 -- # uname 00:14:50.016 11:00:46 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:50.016 11:00:46 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 291768 00:14:50.016 11:00:46 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:50.016 11:00:46 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:50.016 11:00:46 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 291768' 00:14:50.016 killing process with pid 291768 00:14:50.016 11:00:46 -- common/autotest_common.sh@965 -- # kill 291768 00:14:50.016 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.016 00:14:50.016 Latency(us) 00:14:50.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.016 =================================================================================================================== 00:14:50.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.016 11:00:46 -- common/autotest_common.sh@970 -- # wait 291768 00:14:50.016 11:00:46 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.277 11:00:46 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:50.277 11:00:46 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:50.277 11:00:46 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:50.537 11:00:47 -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:14:50.537 11:00:47 -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:50.537 11:00:47 -- target/nvmf_lvs_grow.sh@74 -- # kill -9 288134 00:14:50.537 11:00:47 -- target/nvmf_lvs_grow.sh@75 -- # wait 288134 00:14:50.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 288134 Killed "${NVMF_APP[@]}" "$@" 00:14:50.537 11:00:47 -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:50.537 11:00:47 -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:50.537 11:00:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:50.537 11:00:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:50.537 11:00:47 -- common/autotest_common.sh@10 -- # set +x 00:14:50.537 11:00:47 -- nvmf/common.sh@470 -- # nvmfpid=294127 00:14:50.537 11:00:47 -- nvmf/common.sh@471 -- # waitforlisten 294127 00:14:50.537 11:00:47 -- common/autotest_common.sh@827 -- # '[' -z 294127 ']' 00:14:50.537 11:00:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:50.537 11:00:47 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.537 11:00:47 -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:50.537 11:00:47 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.537 11:00:47 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:50.537 11:00:47 -- common/autotest_common.sh@10 -- # set +x 00:14:50.537 [2024-05-15 11:00:47.174508] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:14:50.537 [2024-05-15 11:00:47.174565] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.797 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.797 [2024-05-15 11:00:47.239444] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.797 [2024-05-15 11:00:47.303277] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.797 [2024-05-15 11:00:47.303312] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.797 [2024-05-15 11:00:47.303319] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.797 [2024-05-15 11:00:47.303325] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.797 [2024-05-15 11:00:47.303331] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
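The dirty variant differs from the clean teardown in one step, visible just above: the running nvmf target is killed with SIGKILL while the lvstore is still open, then a fresh target is started and the same AIO file is re-attached, which is what forces the blobstore recovery reported on the following lines. Schematically (again a sketch: $nvmf_pid, $aio_file and $lvs_uuid stand for the PID, the test/nvmf/target/aio_bdev path and the lvstore UUID that appear in full in the log):

kill -9 "$nvmf_pid"                # no clean shutdown, so the blobstore is never marked clean on disk
wait "$nvmf_pid" || true
# fresh target in the same namespace, then the same backing file again
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
rpc.py bdev_aio_create "$aio_file" aio_bdev 4096   # examine runs bs_recover on the dirty lvstore
rpc.py bdev_wait_for_examine
# after recovery the counts must match what the interrupted run left behind
rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'        # expect 61
rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'  # expect 99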
00:14:50.797 [2024-05-15 11:00:47.303352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.376 11:00:47 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:51.376 11:00:47 -- common/autotest_common.sh@860 -- # return 0 00:14:51.376 11:00:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:51.376 11:00:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.376 11:00:47 -- common/autotest_common.sh@10 -- # set +x 00:14:51.376 11:00:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.376 11:00:47 -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:51.638 [2024-05-15 11:00:48.112252] blobstore.c:4789:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:51.638 [2024-05-15 11:00:48.112332] blobstore.c:4736:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:51.638 [2024-05-15 11:00:48.112361] blobstore.c:4736:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:51.638 11:00:48 -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:51.638 11:00:48 -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7fa77795-60c0-4a31-a0de-94a861b6503a 00:14:51.638 11:00:48 -- common/autotest_common.sh@895 -- # local bdev_name=7fa77795-60c0-4a31-a0de-94a861b6503a 00:14:51.638 11:00:48 -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:51.638 11:00:48 -- common/autotest_common.sh@897 -- # local i 00:14:51.638 11:00:48 -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:51.638 11:00:48 -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:51.638 11:00:48 -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:51.638 11:00:48 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7fa77795-60c0-4a31-a0de-94a861b6503a -t 2000 00:14:51.900 [ 00:14:51.900 { 00:14:51.900 "name": "7fa77795-60c0-4a31-a0de-94a861b6503a", 00:14:51.900 "aliases": [ 00:14:51.900 "lvs/lvol" 00:14:51.900 ], 00:14:51.900 "product_name": "Logical Volume", 00:14:51.900 "block_size": 4096, 00:14:51.900 "num_blocks": 38912, 00:14:51.900 "uuid": "7fa77795-60c0-4a31-a0de-94a861b6503a", 00:14:51.900 "assigned_rate_limits": { 00:14:51.900 "rw_ios_per_sec": 0, 00:14:51.900 "rw_mbytes_per_sec": 0, 00:14:51.900 "r_mbytes_per_sec": 0, 00:14:51.900 "w_mbytes_per_sec": 0 00:14:51.900 }, 00:14:51.900 "claimed": false, 00:14:51.900 "zoned": false, 00:14:51.900 "supported_io_types": { 00:14:51.900 "read": true, 00:14:51.900 "write": true, 00:14:51.900 "unmap": true, 00:14:51.900 "write_zeroes": true, 00:14:51.900 "flush": false, 00:14:51.900 "reset": true, 00:14:51.900 "compare": false, 00:14:51.900 "compare_and_write": false, 00:14:51.900 "abort": false, 00:14:51.900 "nvme_admin": false, 00:14:51.900 "nvme_io": false 00:14:51.900 }, 00:14:51.900 "driver_specific": { 00:14:51.900 "lvol": { 00:14:51.900 "lvol_store_uuid": "1849eb97-97c8-4d13-8824-bb9da421d741", 00:14:51.900 "base_bdev": "aio_bdev", 00:14:51.900 "thin_provision": false, 00:14:51.900 "snapshot": false, 00:14:51.900 "clone": false, 00:14:51.900 "esnap_clone": false 00:14:51.900 } 00:14:51.900 } 00:14:51.900 } 00:14:51.900 ] 00:14:51.900 11:00:48 -- common/autotest_common.sh@903 -- # return 0 00:14:51.900 11:00:48 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:51.900 11:00:48 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:52.162 11:00:48 -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:52.162 11:00:48 -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:52.162 11:00:48 -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:52.162 11:00:48 -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:52.162 11:00:48 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:52.424 [2024-05-15 11:00:48.872122] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:52.424 11:00:48 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:52.424 11:00:48 -- common/autotest_common.sh@648 -- # local es=0 00:14:52.424 11:00:48 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:52.424 11:00:48 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.424 11:00:48 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.424 11:00:48 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.424 11:00:48 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.424 11:00:48 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.424 11:00:48 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.424 11:00:48 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.424 11:00:48 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:52.424 11:00:48 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:52.424 request: 00:14:52.424 { 00:14:52.424 "uuid": "1849eb97-97c8-4d13-8824-bb9da421d741", 00:14:52.424 "method": "bdev_lvol_get_lvstores", 00:14:52.424 "req_id": 1 00:14:52.424 } 00:14:52.424 Got JSON-RPC error response 00:14:52.424 response: 00:14:52.424 { 00:14:52.424 "code": -19, 00:14:52.424 "message": "No such device" 00:14:52.424 } 00:14:52.685 11:00:49 -- common/autotest_common.sh@651 -- # es=1 00:14:52.685 11:00:49 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.685 11:00:49 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.685 11:00:49 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.685 11:00:49 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:52.685 aio_bdev 00:14:52.685 11:00:49 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7fa77795-60c0-4a31-a0de-94a861b6503a 00:14:52.685 11:00:49 -- 
common/autotest_common.sh@895 -- # local bdev_name=7fa77795-60c0-4a31-a0de-94a861b6503a 00:14:52.685 11:00:49 -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:52.685 11:00:49 -- common/autotest_common.sh@897 -- # local i 00:14:52.685 11:00:49 -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:52.685 11:00:49 -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:52.685 11:00:49 -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:52.946 11:00:49 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7fa77795-60c0-4a31-a0de-94a861b6503a -t 2000 00:14:52.946 [ 00:14:52.946 { 00:14:52.946 "name": "7fa77795-60c0-4a31-a0de-94a861b6503a", 00:14:52.946 "aliases": [ 00:14:52.946 "lvs/lvol" 00:14:52.946 ], 00:14:52.946 "product_name": "Logical Volume", 00:14:52.946 "block_size": 4096, 00:14:52.946 "num_blocks": 38912, 00:14:52.946 "uuid": "7fa77795-60c0-4a31-a0de-94a861b6503a", 00:14:52.946 "assigned_rate_limits": { 00:14:52.946 "rw_ios_per_sec": 0, 00:14:52.946 "rw_mbytes_per_sec": 0, 00:14:52.946 "r_mbytes_per_sec": 0, 00:14:52.946 "w_mbytes_per_sec": 0 00:14:52.946 }, 00:14:52.947 "claimed": false, 00:14:52.947 "zoned": false, 00:14:52.947 "supported_io_types": { 00:14:52.947 "read": true, 00:14:52.947 "write": true, 00:14:52.947 "unmap": true, 00:14:52.947 "write_zeroes": true, 00:14:52.947 "flush": false, 00:14:52.947 "reset": true, 00:14:52.947 "compare": false, 00:14:52.947 "compare_and_write": false, 00:14:52.947 "abort": false, 00:14:52.947 "nvme_admin": false, 00:14:52.947 "nvme_io": false 00:14:52.947 }, 00:14:52.947 "driver_specific": { 00:14:52.947 "lvol": { 00:14:52.947 "lvol_store_uuid": "1849eb97-97c8-4d13-8824-bb9da421d741", 00:14:52.947 "base_bdev": "aio_bdev", 00:14:52.947 "thin_provision": false, 00:14:52.947 "snapshot": false, 00:14:52.947 "clone": false, 00:14:52.947 "esnap_clone": false 00:14:52.947 } 00:14:52.947 } 00:14:52.947 } 00:14:52.947 ] 00:14:52.947 11:00:49 -- common/autotest_common.sh@903 -- # return 0 00:14:52.947 11:00:49 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:52.947 11:00:49 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:53.207 11:00:49 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:53.207 11:00:49 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:53.207 11:00:49 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:53.469 11:00:49 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:53.469 11:00:49 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7fa77795-60c0-4a31-a0de-94a861b6503a 00:14:53.469 11:00:50 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1849eb97-97c8-4d13-8824-bb9da421d741 00:14:53.730 11:00:50 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:53.991 11:00:50 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:53.991 00:14:53.991 real 0m16.821s 00:14:53.991 user 
0m44.161s 00:14:53.991 sys 0m2.770s 00:14:53.991 11:00:50 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:53.991 11:00:50 -- common/autotest_common.sh@10 -- # set +x 00:14:53.991 ************************************ 00:14:53.991 END TEST lvs_grow_dirty 00:14:53.991 ************************************ 00:14:53.991 11:00:50 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:53.991 11:00:50 -- common/autotest_common.sh@804 -- # type=--id 00:14:53.991 11:00:50 -- common/autotest_common.sh@805 -- # id=0 00:14:53.991 11:00:50 -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:53.991 11:00:50 -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:53.991 11:00:50 -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:53.991 11:00:50 -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:53.991 11:00:50 -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:53.991 11:00:50 -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:53.991 nvmf_trace.0 00:14:53.991 11:00:50 -- common/autotest_common.sh@819 -- # return 0 00:14:53.991 11:00:50 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:53.991 11:00:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:53.991 11:00:50 -- nvmf/common.sh@117 -- # sync 00:14:53.991 11:00:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.991 11:00:50 -- nvmf/common.sh@120 -- # set +e 00:14:53.991 11:00:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.991 11:00:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.991 rmmod nvme_tcp 00:14:53.991 rmmod nvme_fabrics 00:14:53.991 rmmod nvme_keyring 00:14:53.991 11:00:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.991 11:00:50 -- nvmf/common.sh@124 -- # set -e 00:14:53.991 11:00:50 -- nvmf/common.sh@125 -- # return 0 00:14:53.991 11:00:50 -- nvmf/common.sh@478 -- # '[' -n 294127 ']' 00:14:53.991 11:00:50 -- nvmf/common.sh@479 -- # killprocess 294127 00:14:53.991 11:00:50 -- common/autotest_common.sh@946 -- # '[' -z 294127 ']' 00:14:53.991 11:00:50 -- common/autotest_common.sh@950 -- # kill -0 294127 00:14:53.991 11:00:50 -- common/autotest_common.sh@951 -- # uname 00:14:53.991 11:00:50 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.991 11:00:50 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 294127 00:14:53.991 11:00:50 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:53.991 11:00:50 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:53.991 11:00:50 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 294127' 00:14:53.991 killing process with pid 294127 00:14:53.991 11:00:50 -- common/autotest_common.sh@965 -- # kill 294127 00:14:53.991 11:00:50 -- common/autotest_common.sh@970 -- # wait 294127 00:14:54.252 11:00:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:54.252 11:00:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:54.252 11:00:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:54.252 11:00:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.252 11:00:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.252 11:00:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.252 11:00:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.252 11:00:50 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:56.805 11:00:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.805 00:14:56.805 real 0m42.678s 00:14:56.805 user 1m5.105s 00:14:56.805 sys 0m9.457s 00:14:56.805 11:00:52 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:56.805 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:14:56.805 ************************************ 00:14:56.805 END TEST nvmf_lvs_grow 00:14:56.805 ************************************ 00:14:56.805 11:00:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:56.805 11:00:52 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:56.805 11:00:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.805 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:14:56.805 ************************************ 00:14:56.805 START TEST nvmf_bdev_io_wait 00:14:56.805 ************************************ 00:14:56.805 11:00:52 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:56.805 * Looking for test storage... 00:14:56.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.805 11:00:53 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.805 11:00:53 -- nvmf/common.sh@7 -- # uname -s 00:14:56.805 11:00:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.805 11:00:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.805 11:00:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.805 11:00:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.805 11:00:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.805 11:00:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.805 11:00:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.805 11:00:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.805 11:00:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.805 11:00:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.805 11:00:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:56.805 11:00:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:56.805 11:00:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.805 11:00:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.805 11:00:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.805 11:00:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.805 11:00:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.805 11:00:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.805 11:00:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.805 11:00:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.805 11:00:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.805 11:00:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.805 11:00:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.805 11:00:53 -- paths/export.sh@5 -- # export PATH 00:14:56.805 11:00:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.805 11:00:53 -- nvmf/common.sh@47 -- # : 0 00:14:56.805 11:00:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.805 11:00:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.805 11:00:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.805 11:00:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.805 11:00:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.805 11:00:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.805 11:00:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.805 11:00:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.805 11:00:53 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.805 11:00:53 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.805 11:00:53 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:56.805 11:00:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:56.805 11:00:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.805 11:00:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:56.805 11:00:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:56.805 11:00:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:56.805 11:00:53 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.805 11:00:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.805 11:00:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.805 11:00:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:56.805 11:00:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:56.805 11:00:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.805 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:15:03.401 11:00:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:03.401 11:00:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.401 11:00:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.401 11:00:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.401 11:00:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.401 11:00:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.401 11:00:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.401 11:00:59 -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.401 11:00:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.401 11:00:59 -- nvmf/common.sh@296 -- # e810=() 00:15:03.401 11:00:59 -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.401 11:00:59 -- nvmf/common.sh@297 -- # x722=() 00:15:03.401 11:00:59 -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.401 11:00:59 -- nvmf/common.sh@298 -- # mlx=() 00:15:03.401 11:00:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.401 11:00:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.401 11:00:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.401 11:00:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.401 11:00:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.401 11:00:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.401 11:00:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:03.401 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:03.401 11:00:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:15:03.401 11:00:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:03.401 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:03.401 11:00:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.401 11:00:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.401 11:00:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.401 11:00:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.401 11:00:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:03.401 11:00:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.401 11:00:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:03.401 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:03.401 11:00:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.401 11:00:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.401 11:00:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.401 11:00:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:03.401 11:00:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.402 11:00:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:03.402 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:03.402 11:00:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.402 11:00:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:03.402 11:00:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:03.402 11:00:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:03.402 11:00:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:03.402 11:00:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:03.402 11:00:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.402 11:00:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.402 11:00:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.402 11:00:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.402 11:00:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.402 11:00:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.402 11:00:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.402 11:00:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.402 11:00:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.402 11:00:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.402 11:00:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.402 11:00:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.402 11:00:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.402 11:00:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.402 11:00:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.402 11:00:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:03.402 11:00:59 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.402 11:00:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.402 11:00:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.402 11:00:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:03.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:15:03.402 00:15:03.402 --- 10.0.0.2 ping statistics --- 00:15:03.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.402 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:15:03.402 11:00:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:15:03.402 00:15:03.402 --- 10.0.0.1 ping statistics --- 00:15:03.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.402 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:15:03.402 11:00:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.402 11:00:59 -- nvmf/common.sh@411 -- # return 0 00:15:03.402 11:00:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:03.402 11:00:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.402 11:00:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:03.402 11:00:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:03.402 11:00:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.402 11:00:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:03.402 11:00:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:03.402 11:00:59 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:03.402 11:00:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:03.402 11:00:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:03.402 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:15:03.402 11:00:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:03.402 11:00:59 -- nvmf/common.sh@470 -- # nvmfpid=298930 00:15:03.402 11:00:59 -- nvmf/common.sh@471 -- # waitforlisten 298930 00:15:03.402 11:00:59 -- common/autotest_common.sh@827 -- # '[' -z 298930 ']' 00:15:03.402 11:01:00 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.402 11:01:00 -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:03.402 11:01:00 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.402 11:01:00 -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:03.402 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:15:03.402 [2024-05-15 11:01:00.050128] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
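The tcp_init sequence traced above splits the two E810 ports between namespaces: the target-side port (cvl_0_0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and the two pings prove the path in both directions before the target is started inside the namespace. Condensed (interface names and addresses copied from the log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP port 4420 through the host firewall
ping -c 1 10.0.0.2                                     # root namespace -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator port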
00:15:03.402 [2024-05-15 11:01:00.050198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.663 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.663 [2024-05-15 11:01:00.121824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.663 [2024-05-15 11:01:00.199316] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.663 [2024-05-15 11:01:00.199357] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.663 [2024-05-15 11:01:00.199365] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.664 [2024-05-15 11:01:00.199371] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.664 [2024-05-15 11:01:00.199377] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.664 [2024-05-15 11:01:00.199530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.664 [2024-05-15 11:01:00.199630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.664 [2024-05-15 11:01:00.199744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.664 [2024-05-15 11:01:00.199745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.235 11:01:00 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.235 11:01:00 -- common/autotest_common.sh@860 -- # return 0 00:15:04.235 11:01:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:04.235 11:01:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.235 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:15:04.235 11:01:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.235 11:01:00 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:04.235 11:01:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.235 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:15:04.235 11:01:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.235 11:01:00 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:04.235 11:01:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.235 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:15:04.514 11:01:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.514 11:01:00 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.514 11:01:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.514 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:15:04.514 [2024-05-15 11:01:00.939639] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.514 11:01:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.514 11:01:00 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:04.514 11:01:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.514 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:15:04.514 Malloc0 00:15:04.514 11:01:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.514 11:01:00 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:04.514 11:01:00 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.514 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:15:04.514 11:01:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.514 11:01:00 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.514 11:01:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.514 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:15:04.514 11:01:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.514 11:01:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.514 11:01:01 -- common/autotest_common.sh@10 -- # set +x 00:15:04.514 [2024-05-15 11:01:01.007660] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:04.514 [2024-05-15 11:01:01.007891] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.514 11:01:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=299213 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@30 -- # READ_PID=299215 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:04.514 11:01:01 -- nvmf/common.sh@521 -- # config=() 00:15:04.514 11:01:01 -- nvmf/common.sh@521 -- # local subsystem config 00:15:04.514 11:01:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:04.514 11:01:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:04.514 { 00:15:04.514 "params": { 00:15:04.514 "name": "Nvme$subsystem", 00:15:04.514 "trtype": "$TEST_TRANSPORT", 00:15:04.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.514 "adrfam": "ipv4", 00:15:04.514 "trsvcid": "$NVMF_PORT", 00:15:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.514 "hdgst": ${hdgst:-false}, 00:15:04.514 "ddgst": ${ddgst:-false} 00:15:04.514 }, 00:15:04.514 "method": "bdev_nvme_attach_controller" 00:15:04.514 } 00:15:04.514 EOF 00:15:04.514 )") 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=299217 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:04.514 11:01:01 -- nvmf/common.sh@521 -- # config=() 00:15:04.514 11:01:01 -- nvmf/common.sh@521 -- # local subsystem config 00:15:04.514 11:01:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=299220 00:15:04.514 11:01:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:04.514 { 00:15:04.514 "params": { 00:15:04.514 "name": "Nvme$subsystem", 00:15:04.514 "trtype": "$TEST_TRANSPORT", 00:15:04.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.514 "adrfam": "ipv4", 00:15:04.514 "trsvcid": "$NVMF_PORT", 00:15:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.514 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:15:04.514 "hdgst": ${hdgst:-false}, 00:15:04.514 "ddgst": ${ddgst:-false} 00:15:04.514 }, 00:15:04.514 "method": "bdev_nvme_attach_controller" 00:15:04.514 } 00:15:04.514 EOF 00:15:04.514 )") 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@35 -- # sync 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:04.514 11:01:01 -- nvmf/common.sh@543 -- # cat 00:15:04.514 11:01:01 -- nvmf/common.sh@521 -- # config=() 00:15:04.514 11:01:01 -- nvmf/common.sh@521 -- # local subsystem config 00:15:04.514 11:01:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:04.514 11:01:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:04.514 { 00:15:04.514 "params": { 00:15:04.514 "name": "Nvme$subsystem", 00:15:04.514 "trtype": "$TEST_TRANSPORT", 00:15:04.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.514 "adrfam": "ipv4", 00:15:04.514 "trsvcid": "$NVMF_PORT", 00:15:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.514 "hdgst": ${hdgst:-false}, 00:15:04.514 "ddgst": ${ddgst:-false} 00:15:04.514 }, 00:15:04.514 "method": "bdev_nvme_attach_controller" 00:15:04.514 } 00:15:04.514 EOF 00:15:04.514 )") 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:04.514 11:01:01 -- nvmf/common.sh@521 -- # config=() 00:15:04.514 11:01:01 -- nvmf/common.sh@521 -- # local subsystem config 00:15:04.514 11:01:01 -- nvmf/common.sh@543 -- # cat 00:15:04.514 11:01:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:04.514 11:01:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:04.514 { 00:15:04.514 "params": { 00:15:04.514 "name": "Nvme$subsystem", 00:15:04.514 "trtype": "$TEST_TRANSPORT", 00:15:04.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.514 "adrfam": "ipv4", 00:15:04.514 "trsvcid": "$NVMF_PORT", 00:15:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.514 "hdgst": ${hdgst:-false}, 00:15:04.514 "ddgst": ${ddgst:-false} 00:15:04.514 }, 00:15:04.514 "method": "bdev_nvme_attach_controller" 00:15:04.514 } 00:15:04.514 EOF 00:15:04.514 )") 00:15:04.514 11:01:01 -- nvmf/common.sh@543 -- # cat 00:15:04.514 11:01:01 -- target/bdev_io_wait.sh@37 -- # wait 299213 00:15:04.514 11:01:01 -- nvmf/common.sh@543 -- # cat 00:15:04.514 11:01:01 -- nvmf/common.sh@545 -- # jq . 00:15:04.514 11:01:01 -- nvmf/common.sh@545 -- # jq . 00:15:04.514 11:01:01 -- nvmf/common.sh@545 -- # jq . 
00:15:04.514 11:01:01 -- nvmf/common.sh@546 -- # IFS=, 00:15:04.514 11:01:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:04.514 "params": { 00:15:04.514 "name": "Nvme1", 00:15:04.514 "trtype": "tcp", 00:15:04.514 "traddr": "10.0.0.2", 00:15:04.514 "adrfam": "ipv4", 00:15:04.514 "trsvcid": "4420", 00:15:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.514 "hdgst": false, 00:15:04.514 "ddgst": false 00:15:04.514 }, 00:15:04.514 "method": "bdev_nvme_attach_controller" 00:15:04.514 }' 00:15:04.514 11:01:01 -- nvmf/common.sh@545 -- # jq . 00:15:04.514 11:01:01 -- nvmf/common.sh@546 -- # IFS=, 00:15:04.514 11:01:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:04.514 "params": { 00:15:04.514 "name": "Nvme1", 00:15:04.514 "trtype": "tcp", 00:15:04.514 "traddr": "10.0.0.2", 00:15:04.514 "adrfam": "ipv4", 00:15:04.514 "trsvcid": "4420", 00:15:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.514 "hdgst": false, 00:15:04.514 "ddgst": false 00:15:04.514 }, 00:15:04.514 "method": "bdev_nvme_attach_controller" 00:15:04.514 }' 00:15:04.514 11:01:01 -- nvmf/common.sh@546 -- # IFS=, 00:15:04.514 11:01:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:04.514 "params": { 00:15:04.514 "name": "Nvme1", 00:15:04.514 "trtype": "tcp", 00:15:04.514 "traddr": "10.0.0.2", 00:15:04.514 "adrfam": "ipv4", 00:15:04.514 "trsvcid": "4420", 00:15:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.514 "hdgst": false, 00:15:04.514 "ddgst": false 00:15:04.514 }, 00:15:04.514 "method": "bdev_nvme_attach_controller" 00:15:04.514 }' 00:15:04.514 11:01:01 -- nvmf/common.sh@546 -- # IFS=, 00:15:04.514 11:01:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:04.514 "params": { 00:15:04.514 "name": "Nvme1", 00:15:04.514 "trtype": "tcp", 00:15:04.514 "traddr": "10.0.0.2", 00:15:04.514 "adrfam": "ipv4", 00:15:04.514 "trsvcid": "4420", 00:15:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.514 "hdgst": false, 00:15:04.514 "ddgst": false 00:15:04.515 }, 00:15:04.515 "method": "bdev_nvme_attach_controller" 00:15:04.515 }' 00:15:04.515 [2024-05-15 11:01:01.064891] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:15:04.515 [2024-05-15 11:01:01.064938] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:04.515 [2024-05-15 11:01:01.070893] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:15:04.515 [2024-05-15 11:01:01.070962] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:04.515 [2024-05-15 11:01:01.074166] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:15:04.515 [2024-05-15 11:01:01.074226] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:04.515 [2024-05-15 11:01:01.074704] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:15:04.515 [2024-05-15 11:01:01.074764] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:04.515 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.515 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.776 [2024-05-15 11:01:01.194075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.776 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.776 [2024-05-15 11:01:01.244032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:04.776 [2024-05-15 11:01:01.246010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.776 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.776 [2024-05-15 11:01:01.297753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:04.776 [2024-05-15 11:01:01.308691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.776 [2024-05-15 11:01:01.344997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.776 [2024-05-15 11:01:01.357891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:04.776 [2024-05-15 11:01:01.393669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:05.035 Running I/O for 1 seconds... 00:15:05.035 Running I/O for 1 seconds... 00:15:05.035 Running I/O for 1 seconds... 00:15:05.035 Running I/O for 1 seconds... 00:15:05.975 00:15:05.975 Latency(us) 00:15:05.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.975 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:05.975 Nvme1n1 : 1.00 14486.16 56.59 0.00 0.00 8811.62 4614.83 14854.83 00:15:05.975 =================================================================================================================== 00:15:05.975 Total : 14486.16 56.59 0.00 0.00 8811.62 4614.83 14854.83 00:15:05.975 00:15:05.975 Latency(us) 00:15:05.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.975 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:05.975 Nvme1n1 : 1.01 12194.47 47.63 0.00 0.00 10462.46 5133.65 22828.37 00:15:05.975 =================================================================================================================== 00:15:05.975 Total : 12194.47 47.63 0.00 0.00 10462.46 5133.65 22828.37 00:15:05.975 00:15:05.975 Latency(us) 00:15:05.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.975 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:05.975 Nvme1n1 : 1.00 16340.86 63.83 0.00 0.00 7811.45 4505.60 15837.87 00:15:05.975 =================================================================================================================== 00:15:05.975 Total : 16340.86 63.83 0.00 0.00 7811.45 4505.60 15837.87 00:15:06.235 00:15:06.235 Latency(us) 00:15:06.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.235 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:06.235 Nvme1n1 : 1.00 195326.18 762.99 0.00 0.00 652.75 262.83 730.45 00:15:06.235 =================================================================================================================== 00:15:06.235 Total : 195326.18 762.99 0.00 0.00 652.75 262.83 730.45 00:15:06.235 11:01:02 -- target/bdev_io_wait.sh@38 -- # wait 299215 00:15:06.235 
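All four one-second jobs above (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80) attach to the same Malloc0 namespace exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 port 4420. As an illustrative manual check of that listener (not part of the test flow), a plain kernel-initiator connect would look roughly like this:

# Hypothetical manual sanity check of the listener advertised above.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list                                   # the SPDK namespace should appear
nvme disconnect -n nqn.2016-06.io.spdk:cnode1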
11:01:02 -- target/bdev_io_wait.sh@39 -- # wait 299217 00:15:06.235 11:01:02 -- target/bdev_io_wait.sh@40 -- # wait 299220 00:15:06.235 11:01:02 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.235 11:01:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.235 11:01:02 -- common/autotest_common.sh@10 -- # set +x 00:15:06.235 11:01:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.235 11:01:02 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:06.235 11:01:02 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:06.235 11:01:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:06.235 11:01:02 -- nvmf/common.sh@117 -- # sync 00:15:06.235 11:01:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.235 11:01:02 -- nvmf/common.sh@120 -- # set +e 00:15:06.235 11:01:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.235 11:01:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.235 rmmod nvme_tcp 00:15:06.235 rmmod nvme_fabrics 00:15:06.494 rmmod nvme_keyring 00:15:06.494 11:01:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.494 11:01:02 -- nvmf/common.sh@124 -- # set -e 00:15:06.494 11:01:02 -- nvmf/common.sh@125 -- # return 0 00:15:06.494 11:01:02 -- nvmf/common.sh@478 -- # '[' -n 298930 ']' 00:15:06.494 11:01:02 -- nvmf/common.sh@479 -- # killprocess 298930 00:15:06.494 11:01:02 -- common/autotest_common.sh@946 -- # '[' -z 298930 ']' 00:15:06.494 11:01:02 -- common/autotest_common.sh@950 -- # kill -0 298930 00:15:06.494 11:01:02 -- common/autotest_common.sh@951 -- # uname 00:15:06.494 11:01:02 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:06.494 11:01:02 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 298930 00:15:06.494 11:01:02 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:06.494 11:01:02 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:06.494 11:01:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 298930' 00:15:06.494 killing process with pid 298930 00:15:06.494 11:01:02 -- common/autotest_common.sh@965 -- # kill 298930 00:15:06.494 [2024-05-15 11:01:02.975401] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:06.494 11:01:02 -- common/autotest_common.sh@970 -- # wait 298930 00:15:06.494 11:01:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:06.494 11:01:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:06.494 11:01:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:06.494 11:01:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.494 11:01:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.494 11:01:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.495 11:01:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.495 11:01:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.040 11:01:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:09.040 00:15:09.041 real 0m12.266s 00:15:09.041 user 0m19.101s 00:15:09.041 sys 0m6.578s 00:15:09.041 11:01:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:09.041 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 ************************************ 00:15:09.041 END TEST nvmf_bdev_io_wait 00:15:09.041 ************************************ 
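Before the next test begins, note the RPC sequence this suite wraps around each run, visible in the rpc_cmd trace above: create the TCP transport, create a 64 MiB / 512 B-block Malloc0 bdev, create subsystem cnode1, add the namespace, add the 10.0.0.2:4420 listener, and delete the subsystem during cleanup. rpc_cmd is a thin wrapper over scripts/rpc.py, so a rough standalone equivalent of the same sequence is:

# Rough standalone equivalent of the rpc_cmd calls traced in nvmf_bdev_io_wait
# (assumes an nvmf_tgt is already listening on the default RPC socket).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ...run the I/O workload...
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1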
00:15:09.041 11:01:05 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:09.041 11:01:05 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:09.041 11:01:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:09.041 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 ************************************ 00:15:09.041 START TEST nvmf_queue_depth 00:15:09.041 ************************************ 00:15:09.041 11:01:05 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:09.041 * Looking for test storage... 00:15:09.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.041 11:01:05 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.041 11:01:05 -- nvmf/common.sh@7 -- # uname -s 00:15:09.041 11:01:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.041 11:01:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.041 11:01:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.041 11:01:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.041 11:01:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.041 11:01:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.041 11:01:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.041 11:01:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.041 11:01:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.041 11:01:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.041 11:01:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.041 11:01:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.041 11:01:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.041 11:01:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.041 11:01:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.041 11:01:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.041 11:01:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.041 11:01:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.041 11:01:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.041 11:01:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.041 11:01:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.041 11:01:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.041 11:01:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.041 11:01:05 -- paths/export.sh@5 -- # export PATH 00:15:09.041 11:01:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.041 11:01:05 -- nvmf/common.sh@47 -- # : 0 00:15:09.041 11:01:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.041 11:01:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.041 11:01:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.041 11:01:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.041 11:01:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.041 11:01:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.041 11:01:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.041 11:01:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.041 11:01:05 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:09.041 11:01:05 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:09.041 11:01:05 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:09.041 11:01:05 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:09.041 11:01:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:09.041 11:01:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.041 11:01:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:09.041 11:01:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:09.041 11:01:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:09.041 11:01:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.041 11:01:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.041 11:01:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.041 11:01:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:09.041 11:01:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:09.041 11:01:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.041 11:01:05 -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.633 11:01:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:15.633 11:01:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.633 11:01:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.633 11:01:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.633 11:01:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.633 11:01:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.633 11:01:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.633 11:01:12 -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.633 11:01:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.633 11:01:12 -- nvmf/common.sh@296 -- # e810=() 00:15:15.633 11:01:12 -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.633 11:01:12 -- nvmf/common.sh@297 -- # x722=() 00:15:15.633 11:01:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.633 11:01:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:15.633 11:01:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.633 11:01:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.633 11:01:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.633 11:01:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.633 11:01:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.633 11:01:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.633 11:01:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:15.633 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:15.633 11:01:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.633 11:01:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:15.633 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:15.633 11:01:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:15:15.633 11:01:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.633 11:01:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.633 11:01:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.633 11:01:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.633 11:01:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:15.633 11:01:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.633 11:01:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:15.633 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:15.633 11:01:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.633 11:01:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.633 11:01:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.633 11:01:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:15.633 11:01:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.633 11:01:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:15.633 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:15.633 11:01:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.633 11:01:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:15.633 11:01:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:15.633 11:01:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:15.634 11:01:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:15.634 11:01:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:15.634 11:01:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.634 11:01:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.634 11:01:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.634 11:01:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.634 11:01:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.634 11:01:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.634 11:01:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.634 11:01:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.634 11:01:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.634 11:01:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.634 11:01:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.634 11:01:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.634 11:01:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.634 11:01:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.634 11:01:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.634 11:01:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.634 11:01:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.895 11:01:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.895 11:01:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.895 11:01:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:15.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:15:15.895 00:15:15.895 --- 10.0.0.2 ping statistics --- 00:15:15.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.895 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:15:15.895 11:01:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:15:15.895 00:15:15.895 --- 10.0.0.1 ping statistics --- 00:15:15.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.895 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:15:15.895 11:01:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.895 11:01:12 -- nvmf/common.sh@411 -- # return 0 00:15:15.895 11:01:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:15.895 11:01:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.895 11:01:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:15.895 11:01:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:15.895 11:01:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.895 11:01:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:15.895 11:01:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:15.895 11:01:12 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:15.895 11:01:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:15.895 11:01:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:15.895 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:15:15.895 11:01:12 -- nvmf/common.sh@470 -- # nvmfpid=303787 00:15:15.895 11:01:12 -- nvmf/common.sh@471 -- # waitforlisten 303787 00:15:15.895 11:01:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:15.895 11:01:12 -- common/autotest_common.sh@827 -- # '[' -z 303787 ']' 00:15:15.895 11:01:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.895 11:01:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.895 11:01:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.895 11:01:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.896 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:15:15.896 [2024-05-15 11:01:12.498555] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:15:15.896 [2024-05-15 11:01:12.498626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.896 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.157 [2024-05-15 11:01:12.585769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.157 [2024-05-15 11:01:12.677625] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.157 [2024-05-15 11:01:12.677682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:16.157 [2024-05-15 11:01:12.677696] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.157 [2024-05-15 11:01:12.677702] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.157 [2024-05-15 11:01:12.677709] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.157 [2024-05-15 11:01:12.677740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.728 11:01:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.728 11:01:13 -- common/autotest_common.sh@860 -- # return 0 00:15:16.728 11:01:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:16.728 11:01:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:16.728 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.728 11:01:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.728 11:01:13 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:16.728 11:01:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.728 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.728 [2024-05-15 11:01:13.325366] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.728 11:01:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.728 11:01:13 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:16.728 11:01:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.728 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.728 Malloc0 00:15:16.728 11:01:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.728 11:01:13 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:16.728 11:01:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.728 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.728 11:01:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.728 11:01:13 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:16.728 11:01:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.728 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.728 11:01:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.728 11:01:13 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.728 11:01:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.728 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.989 [2024-05-15 11:01:13.383770] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:16.989 [2024-05-15 11:01:13.383982] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.989 11:01:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.989 11:01:13 -- target/queue_depth.sh@30 -- # bdevperf_pid=303924 00:15:16.989 11:01:13 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:16.989 11:01:13 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:16.989 11:01:13 -- target/queue_depth.sh@33 -- # waitforlisten 303924 /var/tmp/bdevperf.sock 00:15:16.989 11:01:13 -- common/autotest_common.sh@827 -- # '[' -z 303924 ']' 00:15:16.989 11:01:13 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.989 11:01:13 -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.989 11:01:13 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.989 11:01:13 -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.989 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.989 [2024-05-15 11:01:13.434049] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:15:16.989 [2024-05-15 11:01:13.434100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303924 ] 00:15:16.989 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.989 [2024-05-15 11:01:13.494220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.989 [2024-05-15 11:01:13.562984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.560 11:01:14 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:17.560 11:01:14 -- common/autotest_common.sh@860 -- # return 0 00:15:17.560 11:01:14 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.560 11:01:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.560 11:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:17.820 NVMe0n1 00:15:17.820 11:01:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.820 11:01:14 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:17.820 Running I/O for 10 seconds... 
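Unlike the bdev_io_wait jobs, this run starts bdevperf with -z (wait for RPC) on /var/tmp/bdevperf.sock, attaches the NVMe0 controller over that socket, and only then launches the 1024-deep verify workload through bdevperf.py. A condensed sketch of the same flow, with rpc_cmd again standing in for scripts/rpc.py and the harness's wait/retry logic omitted:

# Sketch of the queue-depth run driven above; waitforlisten-style retries omitted.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
  -q 1024 -o 4096 -w verify -t 10 &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests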
00:15:30.051 00:15:30.051 Latency(us) 00:15:30.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.051 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:30.051 Verification LBA range: start 0x0 length 0x4000 00:15:30.051 NVMe0n1 : 10.07 11165.99 43.62 0.00 0.00 91373.95 24576.00 70341.97 00:15:30.051 =================================================================================================================== 00:15:30.051 Total : 11165.99 43.62 0.00 0.00 91373.95 24576.00 70341.97 00:15:30.051 0 00:15:30.051 11:01:24 -- target/queue_depth.sh@39 -- # killprocess 303924 00:15:30.051 11:01:24 -- common/autotest_common.sh@946 -- # '[' -z 303924 ']' 00:15:30.051 11:01:24 -- common/autotest_common.sh@950 -- # kill -0 303924 00:15:30.051 11:01:24 -- common/autotest_common.sh@951 -- # uname 00:15:30.051 11:01:24 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:30.051 11:01:24 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 303924 00:15:30.051 11:01:24 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:30.051 11:01:24 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:30.051 11:01:24 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 303924' 00:15:30.051 killing process with pid 303924 00:15:30.051 11:01:24 -- common/autotest_common.sh@965 -- # kill 303924 00:15:30.051 Received shutdown signal, test time was about 10.000000 seconds 00:15:30.051 00:15:30.051 Latency(us) 00:15:30.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.051 =================================================================================================================== 00:15:30.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.051 11:01:24 -- common/autotest_common.sh@970 -- # wait 303924 00:15:30.051 11:01:24 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:30.051 11:01:24 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:30.051 11:01:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:30.051 11:01:24 -- nvmf/common.sh@117 -- # sync 00:15:30.051 11:01:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.051 11:01:24 -- nvmf/common.sh@120 -- # set +e 00:15:30.051 11:01:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.051 11:01:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.051 rmmod nvme_tcp 00:15:30.051 rmmod nvme_fabrics 00:15:30.051 rmmod nvme_keyring 00:15:30.051 11:01:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.051 11:01:24 -- nvmf/common.sh@124 -- # set -e 00:15:30.051 11:01:24 -- nvmf/common.sh@125 -- # return 0 00:15:30.051 11:01:24 -- nvmf/common.sh@478 -- # '[' -n 303787 ']' 00:15:30.051 11:01:24 -- nvmf/common.sh@479 -- # killprocess 303787 00:15:30.051 11:01:24 -- common/autotest_common.sh@946 -- # '[' -z 303787 ']' 00:15:30.051 11:01:24 -- common/autotest_common.sh@950 -- # kill -0 303787 00:15:30.051 11:01:24 -- common/autotest_common.sh@951 -- # uname 00:15:30.051 11:01:24 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:30.051 11:01:24 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 303787 00:15:30.051 11:01:24 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:30.051 11:01:24 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:30.051 11:01:24 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 303787' 00:15:30.051 killing process with pid 303787 00:15:30.051 11:01:24 -- 
common/autotest_common.sh@965 -- # kill 303787 00:15:30.051 [2024-05-15 11:01:24.893976] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:30.051 11:01:24 -- common/autotest_common.sh@970 -- # wait 303787 00:15:30.051 11:01:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:30.051 11:01:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:30.051 11:01:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:30.051 11:01:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.051 11:01:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.051 11:01:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.051 11:01:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.051 11:01:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.623 11:01:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.623 00:15:30.623 real 0m21.820s 00:15:30.623 user 0m25.118s 00:15:30.623 sys 0m6.597s 00:15:30.623 11:01:27 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:30.623 11:01:27 -- common/autotest_common.sh@10 -- # set +x 00:15:30.623 ************************************ 00:15:30.623 END TEST nvmf_queue_depth 00:15:30.623 ************************************ 00:15:30.623 11:01:27 -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:30.623 11:01:27 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:30.623 11:01:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:30.623 11:01:27 -- common/autotest_common.sh@10 -- # set +x 00:15:30.623 ************************************ 00:15:30.623 START TEST nvmf_target_multipath 00:15:30.623 ************************************ 00:15:30.623 11:01:27 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:30.623 * Looking for test storage... 
00:15:30.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.623 11:01:27 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.623 11:01:27 -- nvmf/common.sh@7 -- # uname -s 00:15:30.885 11:01:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.885 11:01:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.885 11:01:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.885 11:01:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.885 11:01:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.885 11:01:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.885 11:01:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.885 11:01:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.885 11:01:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.885 11:01:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.885 11:01:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.885 11:01:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.885 11:01:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.885 11:01:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.885 11:01:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.885 11:01:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.885 11:01:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.885 11:01:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.885 11:01:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.885 11:01:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.885 11:01:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.885 11:01:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.885 11:01:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.885 11:01:27 -- paths/export.sh@5 -- # export PATH 00:15:30.885 11:01:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.885 11:01:27 -- nvmf/common.sh@47 -- # : 0 00:15:30.885 11:01:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.885 11:01:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.885 11:01:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.885 11:01:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.885 11:01:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.885 11:01:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.885 11:01:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.885 11:01:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.885 11:01:27 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.885 11:01:27 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.885 11:01:27 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:30.885 11:01:27 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:30.885 11:01:27 -- target/multipath.sh@43 -- # nvmftestinit 00:15:30.885 11:01:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:30.885 11:01:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.885 11:01:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:30.885 11:01:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:30.885 11:01:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:30.885 11:01:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.885 11:01:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.885 11:01:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.885 11:01:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:30.885 11:01:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:30.885 11:01:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.885 11:01:27 -- common/autotest_common.sh@10 -- # set +x 00:15:39.028 11:01:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:39.028 11:01:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:39.028 11:01:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:39.028 11:01:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:39.028 11:01:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:39.028 11:01:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:39.028 11:01:34 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:39.028 11:01:34 -- nvmf/common.sh@295 -- # net_devs=() 00:15:39.028 11:01:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:39.028 11:01:34 -- nvmf/common.sh@296 -- # e810=() 00:15:39.028 11:01:34 -- nvmf/common.sh@296 -- # local -ga e810 00:15:39.028 11:01:34 -- nvmf/common.sh@297 -- # x722=() 00:15:39.028 11:01:34 -- nvmf/common.sh@297 -- # local -ga x722 00:15:39.028 11:01:34 -- nvmf/common.sh@298 -- # mlx=() 00:15:39.028 11:01:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:39.028 11:01:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.028 11:01:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:39.028 11:01:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:39.028 11:01:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:39.028 11:01:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.028 11:01:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:39.028 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:39.028 11:01:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.028 11:01:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:39.028 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:39.028 11:01:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:39.028 11:01:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.028 11:01:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.028 11:01:34 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:15:39.028 11:01:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.028 11:01:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:39.028 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:39.028 11:01:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.028 11:01:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.028 11:01:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.028 11:01:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:39.028 11:01:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.028 11:01:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:39.028 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:39.028 11:01:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.028 11:01:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:39.028 11:01:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:39.028 11:01:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:39.028 11:01:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.028 11:01:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.028 11:01:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.028 11:01:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:39.028 11:01:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.028 11:01:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.028 11:01:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:39.028 11:01:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.028 11:01:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.028 11:01:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:39.028 11:01:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:39.028 11:01:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.028 11:01:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.028 11:01:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.028 11:01:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.028 11:01:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.028 11:01:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.028 11:01:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.028 11:01:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.028 11:01:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.778 ms 00:15:39.028 00:15:39.028 --- 10.0.0.2 ping statistics --- 00:15:39.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.028 rtt min/avg/max/mdev = 0.778/0.778/0.778/0.000 ms 00:15:39.028 11:01:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:15:39.028 00:15:39.028 --- 10.0.0.1 ping statistics --- 00:15:39.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.028 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:15:39.028 11:01:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.028 11:01:34 -- nvmf/common.sh@411 -- # return 0 00:15:39.028 11:01:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:39.028 11:01:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.028 11:01:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.028 11:01:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:39.028 11:01:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:39.028 11:01:34 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:39.028 11:01:34 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:39.028 only one NIC for nvmf test 00:15:39.028 11:01:34 -- target/multipath.sh@47 -- # nvmftestfini 00:15:39.028 11:01:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:39.028 11:01:34 -- nvmf/common.sh@117 -- # sync 00:15:39.028 11:01:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.028 11:01:34 -- nvmf/common.sh@120 -- # set +e 00:15:39.028 11:01:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.028 11:01:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.028 rmmod nvme_tcp 00:15:39.028 rmmod nvme_fabrics 00:15:39.028 rmmod nvme_keyring 00:15:39.028 11:01:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.028 11:01:34 -- nvmf/common.sh@124 -- # set -e 00:15:39.028 11:01:34 -- nvmf/common.sh@125 -- # return 0 00:15:39.028 11:01:34 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:39.028 11:01:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:39.028 11:01:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:39.028 11:01:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.028 11:01:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.028 11:01:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.028 11:01:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.028 11:01:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.431 11:01:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.431 11:01:36 -- target/multipath.sh@48 -- # exit 0 00:15:40.431 11:01:36 -- target/multipath.sh@1 -- # nvmftestfini 00:15:40.431 11:01:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:40.431 11:01:36 -- nvmf/common.sh@117 -- # sync 00:15:40.431 11:01:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:40.431 11:01:36 -- nvmf/common.sh@120 -- # set +e 00:15:40.431 11:01:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:40.431 11:01:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:40.431 11:01:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:40.431 11:01:36 -- nvmf/common.sh@124 -- # set -e 00:15:40.431 11:01:36 -- nvmf/common.sh@125 -- # return 0 00:15:40.431 11:01:36 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:40.431 11:01:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:40.431 11:01:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:40.431 11:01:36 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:15:40.431 11:01:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.431 11:01:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:40.431 11:01:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.431 11:01:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.431 11:01:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.431 11:01:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.431 00:15:40.431 real 0m9.523s 00:15:40.431 user 0m2.031s 00:15:40.431 sys 0m5.398s 00:15:40.431 11:01:36 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:40.431 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:40.431 ************************************ 00:15:40.431 END TEST nvmf_target_multipath 00:15:40.431 ************************************ 00:15:40.431 11:01:36 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:40.431 11:01:36 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:40.431 11:01:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:40.431 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:40.431 ************************************ 00:15:40.431 START TEST nvmf_zcopy 00:15:40.431 ************************************ 00:15:40.431 11:01:36 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:40.431 * Looking for test storage... 00:15:40.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.431 11:01:36 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.431 11:01:36 -- nvmf/common.sh@7 -- # uname -s 00:15:40.431 11:01:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.431 11:01:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.431 11:01:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.431 11:01:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.431 11:01:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.431 11:01:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.431 11:01:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.431 11:01:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.431 11:01:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.431 11:01:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.431 11:01:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:40.431 11:01:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:40.431 11:01:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.431 11:01:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.431 11:01:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.431 11:01:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.431 11:01:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.431 11:01:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.431 11:01:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.431 11:01:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:15:40.431 11:01:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.431 11:01:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.431 11:01:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.431 11:01:36 -- paths/export.sh@5 -- # export PATH 00:15:40.431 11:01:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.431 11:01:36 -- nvmf/common.sh@47 -- # : 0 00:15:40.431 11:01:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.431 11:01:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.431 11:01:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.431 11:01:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.431 11:01:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.431 11:01:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.431 11:01:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.431 11:01:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.431 11:01:36 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:40.431 11:01:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:40.431 11:01:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.431 11:01:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:40.431 11:01:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:40.431 11:01:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:40.431 11:01:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.431 11:01:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:40.431 11:01:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.431 11:01:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:40.431 11:01:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:40.431 11:01:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.431 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:47.017 11:01:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:47.017 11:01:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.017 11:01:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.017 11:01:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.017 11:01:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.017 11:01:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.017 11:01:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.017 11:01:43 -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.017 11:01:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.017 11:01:43 -- nvmf/common.sh@296 -- # e810=() 00:15:47.017 11:01:43 -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.017 11:01:43 -- nvmf/common.sh@297 -- # x722=() 00:15:47.017 11:01:43 -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.017 11:01:43 -- nvmf/common.sh@298 -- # mlx=() 00:15:47.017 11:01:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.017 11:01:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.017 11:01:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.017 11:01:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.017 11:01:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.017 11:01:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.018 11:01:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.018 11:01:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.018 11:01:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.018 11:01:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.018 11:01:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.018 11:01:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.018 11:01:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.018 11:01:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:47.018 11:01:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.018 11:01:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.018 11:01:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:47.018 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:47.018 11:01:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.018 11:01:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:47.018 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:15:47.018 11:01:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.018 11:01:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:47.018 11:01:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.018 11:01:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.018 11:01:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:47.281 11:01:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.281 11:01:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:47.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:47.281 11:01:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.281 11:01:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.281 11:01:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.281 11:01:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:47.281 11:01:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.281 11:01:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:47.281 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:47.281 11:01:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.281 11:01:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:47.281 11:01:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:47.281 11:01:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:47.281 11:01:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:47.281 11:01:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:47.281 11:01:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.281 11:01:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.281 11:01:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.281 11:01:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.281 11:01:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.281 11:01:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.281 11:01:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.281 11:01:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.281 11:01:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.281 11:01:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.281 11:01:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.281 11:01:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.281 11:01:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.281 11:01:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.281 11:01:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.281 11:01:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:47.281 11:01:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.281 11:01:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
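The discovery loop traced above walks the supported PCI IDs (the E810 0x159b functions in this run) and resolves each function to its kernel interface by globbing /sys/bus/pci/devices/$pci/net/. A minimal sketch of that lookup, assuming the 0000:4b:00.x addresses reported in the log:

  # Map each test NIC's PCI function to the netdev its driver exposes,
  # mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] || continue            # skip functions with no bound netdev
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done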
00:15:47.281 11:01:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.542 11:01:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:47.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:15:47.542 00:15:47.542 --- 10.0.0.2 ping statistics --- 00:15:47.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.542 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:15:47.542 11:01:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:15:47.542 00:15:47.542 --- 10.0.0.1 ping statistics --- 00:15:47.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.542 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:15:47.542 11:01:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.542 11:01:43 -- nvmf/common.sh@411 -- # return 0 00:15:47.542 11:01:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:47.542 11:01:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.542 11:01:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:47.542 11:01:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:47.542 11:01:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.542 11:01:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:47.542 11:01:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:47.542 11:01:44 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:47.542 11:01:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:47.542 11:01:44 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:47.542 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:47.542 11:01:44 -- nvmf/common.sh@470 -- # nvmfpid=314568 00:15:47.542 11:01:44 -- nvmf/common.sh@471 -- # waitforlisten 314568 00:15:47.542 11:01:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:47.542 11:01:44 -- common/autotest_common.sh@827 -- # '[' -z 314568 ']' 00:15:47.542 11:01:44 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.542 11:01:44 -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:47.542 11:01:44 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.542 11:01:44 -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:47.542 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:47.542 [2024-05-15 11:01:44.060327] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:15:47.542 [2024-05-15 11:01:44.060385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.542 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.542 [2024-05-15 11:01:44.140740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.803 [2024-05-15 11:01:44.216033] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
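The nvmf_tcp_init sequence traced above builds a loopback-style NVMe/TCP topology out of the two E810 ports: the target-side port is isolated in the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port stays in the default namespace with 10.0.0.1/24, TCP port 4420 is opened, and reachability is verified in both directions before the target application is started. A condensed sketch of the same setup, using the cvl_0_* names and addresses from the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator-side port
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator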
00:15:47.803 [2024-05-15 11:01:44.216087] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.803 [2024-05-15 11:01:44.216095] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.803 [2024-05-15 11:01:44.216102] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.803 [2024-05-15 11:01:44.216108] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.803 [2024-05-15 11:01:44.216130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.374 11:01:44 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:48.374 11:01:44 -- common/autotest_common.sh@860 -- # return 0 00:15:48.374 11:01:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:48.374 11:01:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:48.374 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.374 11:01:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.374 11:01:44 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:48.374 11:01:44 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:48.374 11:01:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.374 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 [2024-05-15 11:01:44.881475] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.375 11:01:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.375 11:01:44 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:48.375 11:01:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.375 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 11:01:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.375 11:01:44 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:48.375 11:01:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.375 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 [2024-05-15 11:01:44.897446] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:48.375 [2024-05-15 11:01:44.897735] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.375 11:01:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.375 11:01:44 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:48.375 11:01:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.375 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 11:01:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.375 11:01:44 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:48.375 11:01:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.375 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 malloc0 00:15:48.375 11:01:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.375 11:01:44 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:48.375 11:01:44 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:48.375 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.375 11:01:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.375 11:01:44 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:48.375 11:01:44 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:48.375 11:01:44 -- nvmf/common.sh@521 -- # config=() 00:15:48.375 11:01:44 -- nvmf/common.sh@521 -- # local subsystem config 00:15:48.375 11:01:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:48.375 11:01:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:48.375 { 00:15:48.375 "params": { 00:15:48.375 "name": "Nvme$subsystem", 00:15:48.375 "trtype": "$TEST_TRANSPORT", 00:15:48.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.375 "adrfam": "ipv4", 00:15:48.375 "trsvcid": "$NVMF_PORT", 00:15:48.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.375 "hdgst": ${hdgst:-false}, 00:15:48.375 "ddgst": ${ddgst:-false} 00:15:48.375 }, 00:15:48.375 "method": "bdev_nvme_attach_controller" 00:15:48.375 } 00:15:48.375 EOF 00:15:48.375 )") 00:15:48.375 11:01:44 -- nvmf/common.sh@543 -- # cat 00:15:48.375 11:01:44 -- nvmf/common.sh@545 -- # jq . 00:15:48.375 11:01:44 -- nvmf/common.sh@546 -- # IFS=, 00:15:48.375 11:01:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:48.375 "params": { 00:15:48.375 "name": "Nvme1", 00:15:48.375 "trtype": "tcp", 00:15:48.375 "traddr": "10.0.0.2", 00:15:48.375 "adrfam": "ipv4", 00:15:48.375 "trsvcid": "4420", 00:15:48.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:48.375 "hdgst": false, 00:15:48.375 "ddgst": false 00:15:48.375 }, 00:15:48.375 "method": "bdev_nvme_attach_controller" 00:15:48.375 }' 00:15:48.375 [2024-05-15 11:01:44.982598] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:15:48.375 [2024-05-15 11:01:44.982662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314600 ] 00:15:48.375 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.635 [2024-05-15 11:01:45.047453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.635 [2024-05-15 11:01:45.121839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.895 Running I/O for 10 seconds... 
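Once connectivity is confirmed, the zcopy test starts nvmf_tgt inside the target namespace and provisions it over the default /var/tmp/spdk.sock RPC socket: a TCP transport with zero-copy enabled, subsystem cnode1 with a data listener and a discovery listener on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as NSID 1; bdevperf then connects from the initiator side using the JSON printed by gen_nvmf_target_json above. A sketch of the same steps as direct scripts/rpc.py calls (paths relative to the SPDK tree; the test's rpc_cmd wrapper forwards to the same subcommands):

  # Start the target inside the namespace, then wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # ... wait for /var/tmp/spdk.sock to appear ...

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled (flags as traced above)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB backing bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Initiator side: bdevperf consumes the generated attach config on an fd
  # (the log's --json /dev/fd/62) and runs the 10 s verify workload.
  build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192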
00:15:58.892 00:15:58.892 Latency(us) 00:15:58.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.892 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:58.892 Verification LBA range: start 0x0 length 0x1000 00:15:58.892 Nvme1n1 : 10.01 9594.24 74.96 0.00 0.00 13289.61 1187.84 28398.93 00:15:58.892 =================================================================================================================== 00:15:58.892 Total : 9594.24 74.96 0.00 0.00 13289.61 1187.84 28398.93 00:15:59.152 11:01:55 -- target/zcopy.sh@39 -- # perfpid=316777 00:15:59.152 11:01:55 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:59.152 11:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:59.152 11:01:55 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:59.152 11:01:55 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:59.152 11:01:55 -- nvmf/common.sh@521 -- # config=() 00:15:59.152 11:01:55 -- nvmf/common.sh@521 -- # local subsystem config 00:15:59.152 11:01:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:59.152 11:01:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:59.152 { 00:15:59.152 "params": { 00:15:59.152 "name": "Nvme$subsystem", 00:15:59.152 "trtype": "$TEST_TRANSPORT", 00:15:59.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:59.152 "adrfam": "ipv4", 00:15:59.152 "trsvcid": "$NVMF_PORT", 00:15:59.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:59.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:59.152 "hdgst": ${hdgst:-false}, 00:15:59.152 "ddgst": ${ddgst:-false} 00:15:59.152 }, 00:15:59.152 "method": "bdev_nvme_attach_controller" 00:15:59.152 } 00:15:59.152 EOF 00:15:59.152 )") 00:15:59.152 [2024-05-15 11:01:55.595633] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.152 [2024-05-15 11:01:55.595664] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.152 11:01:55 -- nvmf/common.sh@543 -- # cat 00:15:59.152 11:01:55 -- nvmf/common.sh@545 -- # jq . 
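For reference, the MiB/s column in the verify result above follows directly from the IOPS and the 8192-byte I/O size: 9594.24 IOPS x 8192 B ≈ 78.6 MB/s, i.e. about 74.96 MiB/s after dividing by 1024^2.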
00:15:59.152 [2024-05-15 11:01:55.603616] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.152 [2024-05-15 11:01:55.603627] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.152 11:01:55 -- nvmf/common.sh@546 -- # IFS=, 00:15:59.152 11:01:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:59.152 "params": { 00:15:59.152 "name": "Nvme1", 00:15:59.152 "trtype": "tcp", 00:15:59.152 "traddr": "10.0.0.2", 00:15:59.152 "adrfam": "ipv4", 00:15:59.152 "trsvcid": "4420", 00:15:59.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.152 "hdgst": false, 00:15:59.152 "ddgst": false 00:15:59.152 }, 00:15:59.152 "method": "bdev_nvme_attach_controller" 00:15:59.152 }' 00:15:59.152 [2024-05-15 11:01:55.611634] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.152 [2024-05-15 11:01:55.611642] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.152 [2024-05-15 11:01:55.619654] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.619662] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.627676] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.627684] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.635697] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.635705] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.638099] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:15:59.153 [2024-05-15 11:01:55.638146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316777 ] 00:15:59.153 [2024-05-15 11:01:55.643717] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.643725] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.651737] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.651745] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.659757] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.659765] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.667779] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.667787] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.153 [2024-05-15 11:01:55.675798] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.675807] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.683818] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.683826] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.691839] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.691847] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.699858] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.699866] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.704316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.153 [2024-05-15 11:01:55.707879] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.707887] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.715898] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.715906] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.723919] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.723927] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.731939] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.731947] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.739960] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.739971] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.747980] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.747991] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.756001] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.756010] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.764021] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.764029] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.768525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.153 [2024-05-15 11:01:55.772043] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.772051] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.780064] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.780075] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.788087] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.788099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.796107] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.796116] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.153 [2024-05-15 11:01:55.804124] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.153 [2024-05-15 11:01:55.804132] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.812146] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.812156] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.820167] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.820176] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.828187] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.828195] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.836208] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.836217] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.844235] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.844248] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.852253] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.852263] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.860273] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.860282] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.868296] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.868306] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.876316] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.876323] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.884334] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.884342] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.892354] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.892361] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.900375] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.900382] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.908398] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.908406] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.916419] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.916427] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.924439] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.924448] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.932458] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.932466] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.942009] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.942023] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.948502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.948513] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 Running I/O for 5 seconds... 
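The repeated subsystem.c/nvmf_rpc.c pairs that start here and continue below record nvmf_subsystem_add_ns RPCs being rejected because NSID 1 is still occupied by malloc0; they are emitted around and during the 5-second randrw bdevperf run, so the target keeps serving I/O while refusing the duplicate adds. A minimal illustration of the condition (hypothetical back-to-back calls, not the exact loop the test runs):

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds: NSID 1 now in use
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: "Requested NSID 1 already in use"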
00:15:59.414 [2024-05-15 11:01:55.956521] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.956528] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.967716] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.967733] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.975347] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.975363] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.984184] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.984200] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:55.992548] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:55.992565] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:56.001632] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:56.001647] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:56.010476] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:56.010491] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:56.019477] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:56.019492] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:56.027660] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:56.027675] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:56.036322] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:56.036337] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:56.044694] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:56.044709] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:56.053061] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:56.053076] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.414 [2024-05-15 11:01:56.061709] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.414 [2024-05-15 11:01:56.061724] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.070317] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.070333] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.079171] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 
[2024-05-15 11:01:56.079186] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.088096] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.088114] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.096528] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.096543] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.105139] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.105153] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.113791] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.113806] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.122165] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.122180] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.130695] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.130710] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.139704] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.139718] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.148483] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.148497] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.157470] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.157484] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.166423] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.166437] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.174107] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.174121] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.183238] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.183252] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.191648] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.191670] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.200747] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.200762] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.209719] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.209734] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.218630] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.218644] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.227292] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.227307] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.235906] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.235921] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.244828] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.244842] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.253559] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.253577] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.262246] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.262260] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.270913] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.270927] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.279792] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.279806] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.288215] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.288229] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.297167] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.297182] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.306052] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.306066] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.315120] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.315134] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.676 [2024-05-15 11:01:56.323315] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.676 [2024-05-15 11:01:56.323329] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.331929] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.331944] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.340805] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.340819] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.349320] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.349334] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.357897] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.357911] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.366671] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.366686] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.375405] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.375419] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.384344] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.384358] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.392623] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.392637] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.401837] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.401852] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.938 [2024-05-15 11:01:56.410617] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.938 [2024-05-15 11:01:56.410631] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.939 [2024-05-15 11:01:56.419190] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.939 [2024-05-15 11:01:56.419208] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.939 [2024-05-15 11:01:56.428112] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.939 [2024-05-15 11:01:56.428126] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.939 [2024-05-15 11:01:56.436746] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.939 [2024-05-15 11:01:56.436760] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.939 [2024-05-15 11:01:56.445650] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.939 [2024-05-15 11:01:56.445664] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.939 [2024-05-15 11:01:56.453970] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.939 [2024-05-15 11:01:56.453984] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.939 [2024-05-15 11:01:56.462777] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.939 [2024-05-15 11:01:56.462792] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.939 [2024-05-15 11:01:56.471664] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.939 [2024-05-15 11:01:56.471678] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:1981 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1531 "Unable to add namespace") repeats roughly every 9 ms, timestamps advancing from 2024-05-15 11:01:56.480610 through 11:01:59.097727 (log clock 00:15:59.939 to 00:16:02.559) ...]
00:16:02.559 [2024-05-15 11:01:59.107076] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.559 [2024-05-15 11:01:59.107091] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.115979] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.115993] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.125012] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.125027] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.134008] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.134023] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.142816] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.142830] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.151353] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.151369] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.160376] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.160391] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.168886] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.168900] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.178081] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.178095] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.187119] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.187134] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.559 [2024-05-15 11:01:59.195582] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.559 [2024-05-15 11:01:59.195597] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.560 [2024-05-15 11:01:59.204078] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.560 [2024-05-15 11:01:59.204092] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.213102] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.213121] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.222115] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.222129] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.231114] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.231129] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.240115] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.240129] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.248988] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.249002] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.257798] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.257813] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.265986] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.266001] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.274869] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.274883] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.282853] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.282868] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.291365] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.291380] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.300252] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.300267] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.308417] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.308432] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.317512] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.317527] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.330933] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.330948] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.343732] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.343746] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.356856] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.356870] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.369994] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.370009] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.383218] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.383232] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.396017] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.396031] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.408845] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.408860] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.421806] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.421820] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.434813] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.434827] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.447600] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.447615] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.460668] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.460683] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.821 [2024-05-15 11:01:59.473620] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.821 [2024-05-15 11:01:59.473635] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.081 [2024-05-15 11:01:59.486180] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.081 [2024-05-15 11:01:59.486195] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.081 [2024-05-15 11:01:59.498575] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.081 [2024-05-15 11:01:59.498590] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.081 [2024-05-15 11:01:59.511839] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.081 [2024-05-15 11:01:59.511854] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.081 [2024-05-15 11:01:59.524652] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.081 [2024-05-15 11:01:59.524667] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.081 [2024-05-15 11:01:59.537815] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.081 [2024-05-15 11:01:59.537829] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.081 [2024-05-15 11:01:59.550785] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.081 [2024-05-15 11:01:59.550799] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.081 [2024-05-15 11:01:59.563626] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.081 [2024-05-15 11:01:59.563640] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.576487] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.576502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.589718] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.589732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.610859] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.610874] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.623922] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.623937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.636907] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.636922] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.650014] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.650028] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.662809] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.662824] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.675855] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.675870] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.688339] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.688354] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.701571] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.701586] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.714528] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.714543] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.082 [2024-05-15 11:01:59.727355] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.082 [2024-05-15 11:01:59.727369] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.342 [2024-05-15 11:01:59.740041] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.342 [2024-05-15 11:01:59.740057] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.342 [2024-05-15 11:01:59.752648] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.342 [2024-05-15 11:01:59.752663] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.342 [2024-05-15 11:01:59.765669] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.342 [2024-05-15 11:01:59.765684] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.342 [2024-05-15 11:01:59.778389] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.342 [2024-05-15 11:01:59.778404] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.342 [2024-05-15 11:01:59.790554] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.790569] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.803720] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.803734] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.816706] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.816720] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.829726] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.829741] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.842728] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.842743] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.856070] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.856086] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.869371] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.869387] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.882397] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.882412] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.895581] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.895596] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.908516] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.908530] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.921428] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.921442] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.934357] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.934371] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.947231] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.947246] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.960038] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.960053] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.973296] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.973312] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.343 [2024-05-15 11:01:59.986233] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.343 [2024-05-15 11:01:59.986248] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:01:59.998756] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:01:59.998771] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.011956] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.011973] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.024795] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.024810] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.032619] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.032633] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.041512] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.041527] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.050322] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.050337] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.059115] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.059129] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.067248] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.067262] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.076314] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.076328] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.085343] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.085358] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.604 [2024-05-15 11:02:00.093758] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.604 [2024-05-15 11:02:00.093772] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.102807] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.102821] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.111518] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.111533] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.119818] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.119832] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.128006] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.128020] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.137147] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.137162] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.146213] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.146228] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.154903] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.154918] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.163981] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.163995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.172672] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.172687] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.181580] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.181594] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.190555] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.190570] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.199231] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.199245] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.208191] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.208205] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.217251] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.217266] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.226166] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.226180] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.235060] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.235075] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.243964] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.243978] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.605 [2024-05-15 11:02:00.252632] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.605 [2024-05-15 11:02:00.252648] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.261258] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.261273] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.269773] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.269791] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.278306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.278321] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.287504] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.287518] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.296035] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.296049] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.304870] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.304884] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.313680] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.313694] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.322366] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.322381] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.330980] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.330994] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.339714] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.339728] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.348510] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.348524] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.357476] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.357494] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.366330] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.366345] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.375227] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.375241] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.384004] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.384018] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.392934] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.392949] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.401771] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.401786] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.410571] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.410585] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.418678] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.418692] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.427499] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.427513] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.436236] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.436253] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.445347] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.445361] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.453450] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.453464] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.462486] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.462501] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.471384] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.471399] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.480161] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.480175] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.489155] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.489170] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.497866] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.497881] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.506718] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.506732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.867 [2024-05-15 11:02:00.515551] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.867 [2024-05-15 11:02:00.515565] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.523898] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.523913] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.532487] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.532502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.541343] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.541358] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.550250] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.550265] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.558601] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.558615] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.567253] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.567269] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.576191] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.576206] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.585073] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.585088] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.593425] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.593439] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.602060] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.602079] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.611013] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.611028] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.619692] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.619707] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.628250] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.628264] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.636684] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.636699] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.645368] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.645383] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.653486] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.653501] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.662383] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.662397] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.670816] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.670831] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.679336] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.679351] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.688084] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.688099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.696754] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.696768] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.704901] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.704916] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.713827] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.713842] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.722169] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.129 [2024-05-15 11:02:00.722183] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.129 [2024-05-15 11:02:00.731265] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.130 [2024-05-15 11:02:00.731280] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.130 [2024-05-15 11:02:00.739572] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.130 [2024-05-15 11:02:00.739587] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.130 [2024-05-15 11:02:00.748351] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.130 [2024-05-15 11:02:00.748365] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.130 [2024-05-15 11:02:00.757211] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.130 [2024-05-15 11:02:00.757226] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.130 [2024-05-15 11:02:00.765653] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.130 [2024-05-15 11:02:00.765671] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.130 [2024-05-15 11:02:00.773840] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.130 [2024-05-15 11:02:00.773854] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.782688] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.782704] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.790904] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.790919] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.799817] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.799831] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.808031] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.808045] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.817044] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.817058] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.825728] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.825743] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.834737] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.834752] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.843251] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.843266] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.852067] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.852082] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.860856] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.860871] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.869644] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.869659] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.878431] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.878446] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.886987] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.887002] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.895667] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.895682] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.904227] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.904242] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.913181] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.913195] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.921283] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.921297] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.929938] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.929952] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.938715] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.938729] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.947430] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.947445] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.956495] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.956509] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.964207] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.964222] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.970540] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.970558] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 00:16:04.391 Latency(us) 00:16:04.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.391 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:04.391 Nvme1n1 : 5.00 19940.16 155.78 0.00 0.00 6412.90 2471.25 16165.55 00:16:04.391 
=================================================================================================================== 00:16:04.391 Total : 19940.16 155.78 0.00 0.00 6412.90 2471.25 16165.55 00:16:04.391 [2024-05-15 11:02:00.978560] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.978572] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.986579] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.986590] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:00.994635] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:00.994645] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:01.002652] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:01.002662] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:01.010672] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.391 [2024-05-15 11:02:01.010682] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.391 [2024-05-15 11:02:01.018693] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.392 [2024-05-15 11:02:01.018703] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.392 [2024-05-15 11:02:01.026711] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.392 [2024-05-15 11:02:01.026720] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.392 [2024-05-15 11:02:01.034732] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.392 [2024-05-15 11:02:01.034740] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.392 [2024-05-15 11:02:01.042752] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.392 [2024-05-15 11:02:01.042760] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.652 [2024-05-15 11:02:01.050772] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.652 [2024-05-15 11:02:01.050780] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.652 [2024-05-15 11:02:01.058794] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.652 [2024-05-15 11:02:01.058802] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.652 [2024-05-15 11:02:01.066813] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.652 [2024-05-15 11:02:01.066822] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.652 [2024-05-15 11:02:01.074833] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.652 [2024-05-15 11:02:01.074842] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.652 [2024-05-15 11:02:01.082854] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.652 [2024-05-15 11:02:01.082863] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.652 [2024-05-15 11:02:01.090874] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.652 [2024-05-15 11:02:01.090881] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.652 [2024-05-15 11:02:01.098894] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.652 [2024-05-15 11:02:01.098902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (316777) - No such process 00:16:04.652 11:02:01 -- target/zcopy.sh@49 -- # wait 316777 00:16:04.652 11:02:01 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.652 11:02:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.652 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:16:04.652 11:02:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.652 11:02:01 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:04.652 11:02:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.652 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:16:04.652 delay0 00:16:04.652 11:02:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.652 11:02:01 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:04.652 11:02:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.652 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:16:04.652 11:02:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.652 11:02:01 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:04.652 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.652 [2024-05-15 11:02:01.236371] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:11.240 Initializing NVMe Controllers 00:16:11.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:11.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:11.240 Initialization complete. Launching workers. 
00:16:11.240 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 90 00:16:11.240 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 38 00:16:11.240 success 173, unsuccess 199, failed 0 00:16:11.240 11:02:07 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:11.240 11:02:07 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:11.240 11:02:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:11.240 11:02:07 -- nvmf/common.sh@117 -- # sync 00:16:11.240 11:02:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.240 11:02:07 -- nvmf/common.sh@120 -- # set +e 00:16:11.240 11:02:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.240 11:02:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.240 rmmod nvme_tcp 00:16:11.240 rmmod nvme_fabrics 00:16:11.240 rmmod nvme_keyring 00:16:11.240 11:02:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.240 11:02:07 -- nvmf/common.sh@124 -- # set -e 00:16:11.240 11:02:07 -- nvmf/common.sh@125 -- # return 0 00:16:11.240 11:02:07 -- nvmf/common.sh@478 -- # '[' -n 314568 ']' 00:16:11.240 11:02:07 -- nvmf/common.sh@479 -- # killprocess 314568 00:16:11.240 11:02:07 -- common/autotest_common.sh@946 -- # '[' -z 314568 ']' 00:16:11.240 11:02:07 -- common/autotest_common.sh@950 -- # kill -0 314568 00:16:11.240 11:02:07 -- common/autotest_common.sh@951 -- # uname 00:16:11.240 11:02:07 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:11.240 11:02:07 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 314568 00:16:11.240 11:02:07 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:11.240 11:02:07 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:11.240 11:02:07 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 314568' 00:16:11.240 killing process with pid 314568 00:16:11.240 11:02:07 -- common/autotest_common.sh@965 -- # kill 314568 00:16:11.240 [2024-05-15 11:02:07.619442] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:11.240 11:02:07 -- common/autotest_common.sh@970 -- # wait 314568 00:16:11.240 11:02:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:11.240 11:02:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:11.240 11:02:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:11.240 11:02:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.240 11:02:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.240 11:02:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.240 11:02:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.240 11:02:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.158 11:02:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:13.421 00:16:13.421 real 0m33.036s 00:16:13.421 user 0m45.988s 00:16:13.421 sys 0m9.288s 00:16:13.421 11:02:09 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:13.421 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:16:13.421 ************************************ 00:16:13.421 END TEST nvmf_zcopy 00:16:13.421 ************************************ 00:16:13.421 11:02:09 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:13.421 11:02:09 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:13.421 11:02:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:13.421 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:16:13.421 ************************************ 00:16:13.421 START TEST nvmf_nmic 00:16:13.421 ************************************ 00:16:13.421 11:02:09 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:13.421 * Looking for test storage... 00:16:13.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.421 11:02:09 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.421 11:02:09 -- nvmf/common.sh@7 -- # uname -s 00:16:13.421 11:02:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.421 11:02:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.421 11:02:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.421 11:02:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.421 11:02:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.421 11:02:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.421 11:02:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.421 11:02:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.421 11:02:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.421 11:02:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.421 11:02:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.421 11:02:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.421 11:02:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.421 11:02:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.421 11:02:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.421 11:02:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.421 11:02:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.421 11:02:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.421 11:02:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.421 11:02:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.421 11:02:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.421 11:02:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.421 11:02:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.421 11:02:10 -- paths/export.sh@5 -- # export PATH 00:16:13.421 11:02:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.421 11:02:10 -- nvmf/common.sh@47 -- # : 0 00:16:13.421 11:02:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.421 11:02:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.421 11:02:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.421 11:02:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.421 11:02:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.421 11:02:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.421 11:02:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.421 11:02:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.421 11:02:10 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.421 11:02:10 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.421 11:02:10 -- target/nmic.sh@14 -- # nvmftestinit 00:16:13.421 11:02:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:13.421 11:02:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.421 11:02:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:13.421 11:02:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:13.421 11:02:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:13.421 11:02:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.421 11:02:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.421 11:02:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.421 11:02:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:13.421 11:02:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:13.421 11:02:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.421 11:02:10 -- common/autotest_common.sh@10 -- # set +x 00:16:21.569 11:02:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:16:21.569 11:02:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.569 11:02:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:21.569 11:02:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.569 11:02:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.569 11:02:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.569 11:02:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.569 11:02:16 -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.569 11:02:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.569 11:02:16 -- nvmf/common.sh@296 -- # e810=() 00:16:21.569 11:02:16 -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.569 11:02:16 -- nvmf/common.sh@297 -- # x722=() 00:16:21.569 11:02:16 -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.569 11:02:16 -- nvmf/common.sh@298 -- # mlx=() 00:16:21.569 11:02:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.569 11:02:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.569 11:02:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.569 11:02:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.569 11:02:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.569 11:02:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.569 11:02:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.569 11:02:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.569 11:02:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.569 11:02:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:21.569 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:21.569 11:02:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.569 11:02:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.569 11:02:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.569 11:02:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.569 11:02:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.570 11:02:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:21.570 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:21.570 11:02:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
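The shell trace above is nvmf/common.sh resolving the two Intel E810 ports (device ID 0x159b) and the kernel net interfaces behind them (cvl_0_0 and cvl_0_1). A minimal sketch of the same lookup, assuming lspci is available; it is not the script itself, which walks a cached PCI bus list rather than calling lspci:

    # Enumerate Intel E810 functions (vendor 0x8086, device 0x159b) and print the
    # netdev that sysfs exposes for each one, as in the "Found net devices under ..."
    # lines above.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: $(basename "$netdev")"
        done
    done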
00:16:21.570 11:02:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.570 11:02:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.570 11:02:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:21.570 11:02:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.570 11:02:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:21.570 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:21.570 11:02:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.570 11:02:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.570 11:02:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.570 11:02:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:21.570 11:02:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.570 11:02:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:21.570 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:21.570 11:02:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.570 11:02:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:21.570 11:02:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:21.570 11:02:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:21.570 11:02:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:21.570 11:02:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.570 11:02:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.570 11:02:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.570 11:02:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.570 11:02:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.570 11:02:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.570 11:02:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:21.570 11:02:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.570 11:02:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.570 11:02:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.570 11:02:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.570 11:02:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.570 11:02:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.570 11:02:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.570 11:02:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.570 11:02:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.570 11:02:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.570 11:02:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.570 11:02:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.570 11:02:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:21.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:16:21.570 00:16:21.570 --- 10.0.0.2 ping statistics --- 00:16:21.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.570 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:16:21.570 11:02:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:16:21.570 00:16:21.570 --- 10.0.0.1 ping statistics --- 00:16:21.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.570 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:16:21.570 11:02:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.570 11:02:17 -- nvmf/common.sh@411 -- # return 0 00:16:21.570 11:02:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:21.570 11:02:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.570 11:02:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:21.570 11:02:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:21.570 11:02:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.570 11:02:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:21.570 11:02:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:21.570 11:02:17 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:21.570 11:02:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:21.570 11:02:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:21.570 11:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:21.570 11:02:17 -- nvmf/common.sh@470 -- # nvmfpid=323282 00:16:21.570 11:02:17 -- nvmf/common.sh@471 -- # waitforlisten 323282 00:16:21.570 11:02:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:21.570 11:02:17 -- common/autotest_common.sh@827 -- # '[' -z 323282 ']' 00:16:21.570 11:02:17 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.570 11:02:17 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:21.570 11:02:17 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.570 11:02:17 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:21.570 11:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:21.570 [2024-05-15 11:02:17.140722] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:16:21.570 [2024-05-15 11:02:17.140785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.570 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.570 [2024-05-15 11:02:17.210371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.570 [2024-05-15 11:02:17.285822] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.570 [2024-05-15 11:02:17.285862] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
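The nvmf_tcp_init trace above splits the two ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target interface with 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are ping-tested. A condensed sketch of that wiring, with interface names and addresses taken from the log (other NICs would use different names):

    NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0        # port handed to the target namespace
    INITIATOR_IF=cvl_0_1     # port left in the default namespace

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator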
00:16:21.570 [2024-05-15 11:02:17.285870] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.570 [2024-05-15 11:02:17.285876] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.570 [2024-05-15 11:02:17.285882] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.571 [2024-05-15 11:02:17.286021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.571 [2024-05-15 11:02:17.286140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.571 [2024-05-15 11:02:17.286298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.571 [2024-05-15 11:02:17.286299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.571 11:02:17 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:21.571 11:02:17 -- common/autotest_common.sh@860 -- # return 0 00:16:21.571 11:02:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:21.571 11:02:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.571 11:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 11:02:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.571 11:02:17 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.571 11:02:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 [2024-05-15 11:02:17.962074] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.571 11:02:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.571 11:02:17 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.571 11:02:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 Malloc0 00:16:21.571 11:02:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.571 11:02:17 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:21.571 11:02:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 11:02:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.571 11:02:18 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.571 11:02:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 11:02:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.571 11:02:18 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.571 11:02:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 [2024-05-15 11:02:18.021251] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:21.571 [2024-05-15 11:02:18.021466] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.571 11:02:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.571 11:02:18 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple 
subsystems' 00:16:21.571 test case1: single bdev can't be used in multiple subsystems 00:16:21.571 11:02:18 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:21.571 11:02:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 11:02:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.571 11:02:18 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:21.571 11:02:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 11:02:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.571 11:02:18 -- target/nmic.sh@28 -- # nmic_status=0 00:16:21.571 11:02:18 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:21.571 11:02:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 [2024-05-15 11:02:18.057397] bdev.c:8011:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:21.571 [2024-05-15 11:02:18.057415] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:21.571 [2024-05-15 11:02:18.057423] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.571 request: 00:16:21.571 { 00:16:21.571 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:21.571 "namespace": { 00:16:21.571 "bdev_name": "Malloc0", 00:16:21.571 "no_auto_visible": false 00:16:21.571 }, 00:16:21.571 "method": "nvmf_subsystem_add_ns", 00:16:21.571 "req_id": 1 00:16:21.571 } 00:16:21.571 Got JSON-RPC error response 00:16:21.571 response: 00:16:21.571 { 00:16:21.571 "code": -32602, 00:16:21.571 "message": "Invalid parameters" 00:16:21.571 } 00:16:21.571 11:02:18 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:21.571 11:02:18 -- target/nmic.sh@29 -- # nmic_status=1 00:16:21.571 11:02:18 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:21.571 11:02:18 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:21.571 Adding namespace failed - expected result. 
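Test case1 above fails by design: Malloc0 is already claimed exclusive_write by nqn.2016-06.io.spdk:cnode1, so adding the same bdev to cnode2 returns the -32602 "Invalid parameters" JSON-RPC error, which the script counts as the expected result. A hedged reproduction with rpc.py against an already running nvmf_tgt (script path from the log, default /var/tmp/spdk.sock socket assumed):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # Second claim must fail: the bdev is already opened exclusive_write by cnode1.
    if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo ' Adding namespace failed - expected result.'
    fi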
00:16:21.571 11:02:18 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:21.571 test case2: host connect to nvmf target in multiple paths 00:16:21.571 11:02:18 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:21.571 11:02:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.571 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:21.571 [2024-05-15 11:02:18.069533] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:21.571 11:02:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.571 11:02:18 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.958 11:02:19 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:24.873 11:02:21 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.873 11:02:21 -- common/autotest_common.sh@1194 -- # local i=0 00:16:24.873 11:02:21 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.873 11:02:21 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:24.873 11:02:21 -- common/autotest_common.sh@1201 -- # sleep 2 00:16:26.809 11:02:23 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:26.809 11:02:23 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:26.809 11:02:23 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.809 11:02:23 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:26.809 11:02:23 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.809 11:02:23 -- common/autotest_common.sh@1204 -- # return 0 00:16:26.809 11:02:23 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:26.809 [global] 00:16:26.809 thread=1 00:16:26.809 invalidate=1 00:16:26.809 rw=write 00:16:26.809 time_based=1 00:16:26.809 runtime=1 00:16:26.809 ioengine=libaio 00:16:26.809 direct=1 00:16:26.809 bs=4096 00:16:26.809 iodepth=1 00:16:26.809 norandommap=0 00:16:26.809 numjobs=1 00:16:26.809 00:16:26.809 verify_dump=1 00:16:26.809 verify_backlog=512 00:16:26.809 verify_state_save=0 00:16:26.809 do_verify=1 00:16:26.809 verify=crc32c-intel 00:16:26.809 [job0] 00:16:26.809 filename=/dev/nvme0n1 00:16:26.809 Could not set queue depth (nvme0n1) 00:16:27.075 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.075 fio-3.35 00:16:27.075 Starting 1 thread 00:16:28.462 00:16:28.462 job0: (groupid=0, jobs=1): err= 0: pid=324781: Wed May 15 11:02:24 2024 00:16:28.462 read: IOPS=17, BW=70.0KiB/s (71.7kB/s)(72.0KiB/1029msec) 00:16:28.462 slat (nsec): min=24437, max=26006, avg=24896.44, stdev=379.42 00:16:28.462 clat (usec): min=1102, max=42002, avg=39673.61, stdev=9626.55 00:16:28.462 lat (usec): min=1127, max=42027, avg=39698.50, stdev=9626.49 00:16:28.462 clat percentiles (usec): 00:16:28.462 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41681], 20.00th=[41681], 00:16:28.462 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:28.462 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:28.462 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:28.462 | 99.99th=[42206] 00:16:28.462 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:16:28.462 slat (usec): min=9, max=24190, avg=75.43, stdev=1067.89 00:16:28.462 clat (usec): min=183, max=746, avg=531.00, stdev=103.97 00:16:28.462 lat (usec): min=217, max=24755, avg=606.43, stdev=1074.67 00:16:28.462 clat percentiles (usec): 00:16:28.462 | 1.00th=[ 289], 5.00th=[ 343], 10.00th=[ 404], 20.00th=[ 437], 00:16:28.462 | 30.00th=[ 461], 40.00th=[ 523], 50.00th=[ 545], 60.00th=[ 553], 00:16:28.462 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 660], 95.00th=[ 676], 00:16:28.462 | 99.00th=[ 725], 99.50th=[ 734], 99.90th=[ 750], 99.95th=[ 750], 00:16:28.462 | 99.99th=[ 750] 00:16:28.462 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:28.462 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:28.462 lat (usec) : 250=0.75%, 500=31.89%, 750=63.96% 00:16:28.462 lat (msec) : 2=0.19%, 50=3.21% 00:16:28.462 cpu : usr=0.97%, sys=1.17%, ctx=533, majf=0, minf=1 00:16:28.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.462 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.462 00:16:28.462 Run status group 0 (all jobs): 00:16:28.462 READ: bw=70.0KiB/s (71.7kB/s), 70.0KiB/s-70.0KiB/s (71.7kB/s-71.7kB/s), io=72.0KiB (73.7kB), run=1029-1029msec 00:16:28.462 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:16:28.462 00:16:28.462 Disk stats (read/write): 00:16:28.462 nvme0n1: ios=40/512, merge=0/0, ticks=1533/259, in_queue=1792, util=98.70% 00:16:28.462 11:02:24 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:28.462 11:02:24 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.462 11:02:24 -- common/autotest_common.sh@1215 -- # local i=0 00:16:28.462 11:02:24 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:28.462 11:02:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.462 11:02:25 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:28.462 11:02:25 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.462 11:02:25 -- common/autotest_common.sh@1227 -- # return 0 00:16:28.462 11:02:25 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:28.462 11:02:25 -- target/nmic.sh@53 -- # nvmftestfini 00:16:28.462 11:02:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:28.462 11:02:25 -- nvmf/common.sh@117 -- # sync 00:16:28.462 11:02:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.462 11:02:25 -- nvmf/common.sh@120 -- # set +e 00:16:28.462 11:02:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.462 11:02:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.462 rmmod nvme_tcp 00:16:28.462 rmmod nvme_fabrics 00:16:28.462 rmmod nvme_keyring 00:16:28.462 11:02:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.462 11:02:25 -- nvmf/common.sh@124 -- # set -e 
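The single-job write workload above is produced by scripts/fio-wrapper with "-p nvmf -i 4096 -d 1 -t write -r 1 -v"; the [global]/[job0] listing in the log is the generated job file. A hand-written command-line equivalent, assuming the exported namespace shows up as /dev/nvme0n1 on the initiator:

    # Same parameters as the wrapper-generated job file above, passed directly
    # on the fio command line instead of through a job file.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --thread=1 --invalidate=1 --rw=write --time_based=1 --runtime=1 \
        --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512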
00:16:28.462 11:02:25 -- nvmf/common.sh@125 -- # return 0 00:16:28.462 11:02:25 -- nvmf/common.sh@478 -- # '[' -n 323282 ']' 00:16:28.462 11:02:25 -- nvmf/common.sh@479 -- # killprocess 323282 00:16:28.462 11:02:25 -- common/autotest_common.sh@946 -- # '[' -z 323282 ']' 00:16:28.462 11:02:25 -- common/autotest_common.sh@950 -- # kill -0 323282 00:16:28.462 11:02:25 -- common/autotest_common.sh@951 -- # uname 00:16:28.462 11:02:25 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:28.462 11:02:25 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 323282 00:16:28.724 11:02:25 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:28.724 11:02:25 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:28.724 11:02:25 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 323282' 00:16:28.724 killing process with pid 323282 00:16:28.724 11:02:25 -- common/autotest_common.sh@965 -- # kill 323282 00:16:28.724 [2024-05-15 11:02:25.160147] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:28.724 11:02:25 -- common/autotest_common.sh@970 -- # wait 323282 00:16:28.724 11:02:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:28.724 11:02:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:28.724 11:02:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:28.724 11:02:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.724 11:02:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.724 11:02:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.724 11:02:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.724 11:02:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.269 11:02:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.269 00:16:31.269 real 0m17.478s 00:16:31.269 user 0m48.691s 00:16:31.269 sys 0m6.159s 00:16:31.269 11:02:27 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:31.269 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:31.269 ************************************ 00:16:31.269 END TEST nvmf_nmic 00:16:31.269 ************************************ 00:16:31.269 11:02:27 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:31.269 11:02:27 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:31.269 11:02:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:31.269 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:31.269 ************************************ 00:16:31.269 START TEST nvmf_fio_target 00:16:31.269 ************************************ 00:16:31.269 11:02:27 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:31.269 * Looking for test storage... 
00:16:31.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.269 11:02:27 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.269 11:02:27 -- nvmf/common.sh@7 -- # uname -s 00:16:31.269 11:02:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.269 11:02:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.269 11:02:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.269 11:02:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.269 11:02:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.269 11:02:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.269 11:02:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.269 11:02:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.269 11:02:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.269 11:02:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.269 11:02:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.269 11:02:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.269 11:02:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.269 11:02:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.269 11:02:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.269 11:02:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.269 11:02:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.269 11:02:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.269 11:02:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.269 11:02:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.269 11:02:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.269 11:02:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.269 11:02:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.269 11:02:27 -- paths/export.sh@5 -- # export PATH 00:16:31.269 11:02:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.269 11:02:27 -- nvmf/common.sh@47 -- # : 0 00:16:31.269 11:02:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.269 11:02:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.270 11:02:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.270 11:02:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.270 11:02:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.270 11:02:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.270 11:02:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.270 11:02:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.270 11:02:27 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.270 11:02:27 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.270 11:02:27 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.270 11:02:27 -- target/fio.sh@16 -- # nvmftestinit 00:16:31.270 11:02:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:31.270 11:02:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.270 11:02:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:31.270 11:02:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:31.270 11:02:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:31.270 11:02:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.270 11:02:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.270 11:02:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.270 11:02:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:31.270 11:02:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:31.270 11:02:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.270 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:37.858 11:02:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:37.858 11:02:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:37.858 11:02:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:37.858 11:02:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:37.858 11:02:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:37.858 11:02:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:37.858 11:02:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:37.858 11:02:34 -- nvmf/common.sh@295 -- # net_devs=() 
00:16:37.858 11:02:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:37.858 11:02:34 -- nvmf/common.sh@296 -- # e810=() 00:16:37.858 11:02:34 -- nvmf/common.sh@296 -- # local -ga e810 00:16:37.858 11:02:34 -- nvmf/common.sh@297 -- # x722=() 00:16:37.858 11:02:34 -- nvmf/common.sh@297 -- # local -ga x722 00:16:37.858 11:02:34 -- nvmf/common.sh@298 -- # mlx=() 00:16:37.858 11:02:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:37.858 11:02:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.858 11:02:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:37.858 11:02:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:37.858 11:02:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:37.858 11:02:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.858 11:02:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:37.858 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:37.858 11:02:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.858 11:02:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:37.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:37.858 11:02:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:37.858 11:02:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.858 11:02:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.858 11:02:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:37.858 11:02:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:37.858 11:02:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:37.858 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:37.858 11:02:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.858 11:02:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.858 11:02:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.858 11:02:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:37.858 11:02:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.858 11:02:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:37.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:37.858 11:02:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.858 11:02:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:37.858 11:02:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:37.858 11:02:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:37.858 11:02:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:37.858 11:02:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.858 11:02:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.858 11:02:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:37.858 11:02:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:37.858 11:02:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:37.858 11:02:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:37.858 11:02:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:37.858 11:02:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:37.858 11:02:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.858 11:02:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:37.858 11:02:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:37.858 11:02:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:37.858 11:02:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:37.858 11:02:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:37.858 11:02:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:38.119 11:02:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:38.119 11:02:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.119 11:02:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.119 11:02:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.119 11:02:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:38.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:16:38.120 00:16:38.120 --- 10.0.0.2 ping statistics --- 00:16:38.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.120 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:16:38.120 11:02:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:38.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:16:38.120 00:16:38.120 --- 10.0.0.1 ping statistics --- 00:16:38.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.120 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:16:38.120 11:02:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.120 11:02:34 -- nvmf/common.sh@411 -- # return 0 00:16:38.120 11:02:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:38.120 11:02:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.120 11:02:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:38.120 11:02:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:38.120 11:02:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.120 11:02:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:38.120 11:02:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:38.120 11:02:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:38.120 11:02:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:38.120 11:02:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:38.120 11:02:34 -- common/autotest_common.sh@10 -- # set +x 00:16:38.120 11:02:34 -- nvmf/common.sh@470 -- # nvmfpid=329160 00:16:38.120 11:02:34 -- nvmf/common.sh@471 -- # waitforlisten 329160 00:16:38.120 11:02:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:38.120 11:02:34 -- common/autotest_common.sh@827 -- # '[' -z 329160 ']' 00:16:38.120 11:02:34 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.120 11:02:34 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:38.120 11:02:34 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.120 11:02:34 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:38.120 11:02:34 -- common/autotest_common.sh@10 -- # set +x 00:16:38.120 [2024-05-15 11:02:34.737384] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:16:38.120 [2024-05-15 11:02:34.737445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.120 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.381 [2024-05-15 11:02:34.807049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.381 [2024-05-15 11:02:34.881808] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.381 [2024-05-15 11:02:34.881859] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.381 [2024-05-15 11:02:34.881866] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.381 [2024-05-15 11:02:34.881873] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.381 [2024-05-15 11:02:34.881878] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
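nvmfappstart above launches nvmf_tgt inside the target namespace (note the "ip netns exec cvl_0_0_ns_spdk" prefix on the command line) and waitforlisten polls the RPC socket before the test issues any rpc.py calls. A rough sketch of that start-and-wait pattern; the binary and script paths are taken from the log, while the retry loop below is an assumption rather than the exact waitforlisten logic:

    NS_CMD='ip netns exec cvl_0_0_ns_spdk'
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $NS_CMD $NVMF_TGT -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the default RPC socket until the target answers (assumed retry policy,
    # roughly what waitforlisten does before the test proceeds).
    for _ in $(seq 1 100); do
        $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "nvmf_tgt is up with pid $nvmfpid"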
00:16:38.381 [2024-05-15 11:02:34.882019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.381 [2024-05-15 11:02:34.882148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.381 [2024-05-15 11:02:34.882304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.381 [2024-05-15 11:02:34.882306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.953 11:02:35 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:38.953 11:02:35 -- common/autotest_common.sh@860 -- # return 0 00:16:38.953 11:02:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:38.953 11:02:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.953 11:02:35 -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 11:02:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.953 11:02:35 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:39.215 [2024-05-15 11:02:35.696575] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.215 11:02:35 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.476 11:02:35 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:39.476 11:02:35 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.476 11:02:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:39.476 11:02:36 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.737 11:02:36 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:39.737 11:02:36 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.998 11:02:36 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:39.998 11:02:36 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:39.998 11:02:36 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.259 11:02:36 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:40.259 11:02:36 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.520 11:02:36 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:40.520 11:02:36 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.520 11:02:37 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:40.520 11:02:37 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:40.781 11:02:37 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:41.042 11:02:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:41.042 11:02:37 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.042 11:02:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:41.042 11:02:37 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:41.304 11:02:37 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.304 [2024-05-15 11:02:37.949737] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:41.304 [2024-05-15 11:02:37.949984] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.566 11:02:37 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:41.566 11:02:38 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:41.828 11:02:38 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.213 11:02:39 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:43.213 11:02:39 -- common/autotest_common.sh@1194 -- # local i=0 00:16:43.213 11:02:39 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.213 11:02:39 -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:16:43.213 11:02:39 -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:16:43.213 11:02:39 -- common/autotest_common.sh@1201 -- # sleep 2 00:16:45.759 11:02:41 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:45.759 11:02:41 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:45.759 11:02:41 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.759 11:02:41 -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:16:45.759 11:02:41 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.759 11:02:41 -- common/autotest_common.sh@1204 -- # return 0 00:16:45.759 11:02:41 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:45.759 [global] 00:16:45.759 thread=1 00:16:45.759 invalidate=1 00:16:45.759 rw=write 00:16:45.759 time_based=1 00:16:45.759 runtime=1 00:16:45.759 ioengine=libaio 00:16:45.759 direct=1 00:16:45.759 bs=4096 00:16:45.759 iodepth=1 00:16:45.759 norandommap=0 00:16:45.760 numjobs=1 00:16:45.760 00:16:45.760 verify_dump=1 00:16:45.760 verify_backlog=512 00:16:45.760 verify_state_save=0 00:16:45.760 do_verify=1 00:16:45.760 verify=crc32c-intel 00:16:45.760 [job0] 00:16:45.760 filename=/dev/nvme0n1 00:16:45.760 [job1] 00:16:45.760 filename=/dev/nvme0n2 00:16:45.760 [job2] 00:16:45.760 filename=/dev/nvme0n3 00:16:45.760 [job3] 00:16:45.760 filename=/dev/nvme0n4 00:16:45.760 Could not set queue depth (nvme0n1) 00:16:45.760 Could not set queue depth (nvme0n2) 00:16:45.760 Could not set queue depth (nvme0n3) 00:16:45.760 Could not set queue depth (nvme0n4) 00:16:45.760 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:45.760 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:45.760 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:45.760 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:45.760 fio-3.35 00:16:45.760 Starting 4 threads 00:16:47.158 00:16:47.158 job0: (groupid=0, jobs=1): err= 0: pid=330777: Wed May 15 11:02:43 2024 00:16:47.158 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:16:47.158 slat (nsec): min=9726, max=25660, avg=24598.56, stdev=3713.45 00:16:47.158 clat (usec): min=973, max=42993, avg=39782.78, stdev=9690.19 00:16:47.158 lat (usec): min=982, max=43018, avg=39807.38, stdev=9693.90 00:16:47.158 clat percentiles (usec): 00:16:47.158 | 1.00th=[ 971], 5.00th=[ 971], 10.00th=[41681], 20.00th=[41681], 00:16:47.158 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:47.158 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:16:47.158 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:47.158 | 99.99th=[43254] 00:16:47.158 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:16:47.158 slat (nsec): min=9108, max=56425, avg=30951.42, stdev=8957.42 00:16:47.158 clat (usec): min=189, max=933, avg=585.03, stdev=123.12 00:16:47.158 lat (usec): min=200, max=967, avg=615.98, stdev=126.24 00:16:47.158 clat percentiles (usec): 00:16:47.158 | 1.00th=[ 289], 5.00th=[ 371], 10.00th=[ 433], 20.00th=[ 482], 00:16:47.158 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:16:47.158 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 783], 00:16:47.158 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 930], 99.95th=[ 930], 00:16:47.158 | 99.99th=[ 930] 00:16:47.159 bw ( KiB/s): min= 4096, max= 4096, per=43.35%, avg=4096.00, stdev= 0.00, samples=1 00:16:47.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:47.159 lat (usec) : 250=0.38%, 500=22.83%, 750=63.96%, 1000=9.62% 00:16:47.159 lat (msec) : 50=3.21% 00:16:47.159 cpu : usr=0.97%, sys=1.93%, ctx=533, majf=0, minf=1 00:16:47.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:47.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.159 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:47.159 job1: (groupid=0, jobs=1): err= 0: pid=330782: Wed May 15 11:02:43 2024 00:16:47.159 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:47.159 slat (nsec): min=7056, max=42287, avg=24407.70, stdev=1626.68 00:16:47.159 clat (usec): min=542, max=1257, avg=974.25, stdev=69.40 00:16:47.159 lat (usec): min=566, max=1281, avg=998.65, stdev=69.41 00:16:47.159 clat percentiles (usec): 00:16:47.159 | 1.00th=[ 783], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 930], 00:16:47.159 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 988], 00:16:47.159 | 70.00th=[ 1004], 80.00th=[ 1012], 90.00th=[ 1057], 95.00th=[ 1090], 00:16:47.159 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1254], 99.95th=[ 1254], 00:16:47.159 | 99.99th=[ 1254] 00:16:47.159 write: IOPS=910, BW=3640KiB/s (3728kB/s)(3644KiB/1001msec); 0 zone resets 00:16:47.159 slat (nsec): min=9264, max=63337, avg=25609.10, stdev=10775.94 00:16:47.159 clat (usec): min=99, max=950, avg=499.91, stdev=186.88 00:16:47.159 lat (usec): min=109, max=982, avg=525.52, stdev=192.41 00:16:47.159 clat 
percentiles (usec): 00:16:47.159 | 1.00th=[ 117], 5.00th=[ 131], 10.00th=[ 155], 20.00th=[ 334], 00:16:47.159 | 30.00th=[ 429], 40.00th=[ 498], 50.00th=[ 553], 60.00th=[ 578], 00:16:47.159 | 70.00th=[ 611], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 742], 00:16:47.159 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 947], 99.95th=[ 947], 00:16:47.159 | 99.99th=[ 947] 00:16:47.159 bw ( KiB/s): min= 4096, max= 4096, per=43.35%, avg=4096.00, stdev= 0.00, samples=1 00:16:47.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:47.159 lat (usec) : 100=0.07%, 250=8.22%, 500=17.85%, 750=35.56%, 1000=27.20% 00:16:47.159 lat (msec) : 2=11.10% 00:16:47.159 cpu : usr=2.20%, sys=3.50%, ctx=1425, majf=0, minf=1 00:16:47.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:47.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.159 issued rwts: total=512,911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:47.159 job2: (groupid=0, jobs=1): err= 0: pid=330803: Wed May 15 11:02:43 2024 00:16:47.159 read: IOPS=17, BW=69.8KiB/s (71.4kB/s)(72.0KiB/1032msec) 00:16:47.159 slat (nsec): min=23693, max=24614, avg=24091.89, stdev=220.07 00:16:47.159 clat (usec): min=1142, max=42729, avg=39686.19, stdev=9623.46 00:16:47.159 lat (usec): min=1167, max=42753, avg=39710.29, stdev=9623.33 00:16:47.159 clat percentiles (usec): 00:16:47.159 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41681], 00:16:47.159 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:47.159 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:16:47.159 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:47.159 | 99.99th=[42730] 00:16:47.159 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:16:47.159 slat (nsec): min=9383, max=66055, avg=27988.25, stdev=8627.60 00:16:47.159 clat (usec): min=282, max=903, avg=584.11, stdev=126.71 00:16:47.159 lat (usec): min=292, max=952, avg=612.10, stdev=130.45 00:16:47.159 clat percentiles (usec): 00:16:47.159 | 1.00th=[ 318], 5.00th=[ 343], 10.00th=[ 396], 20.00th=[ 482], 00:16:47.159 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 627], 00:16:47.159 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 775], 00:16:47.159 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 906], 99.95th=[ 906], 00:16:47.159 | 99.99th=[ 906] 00:16:47.159 bw ( KiB/s): min= 4096, max= 4096, per=43.35%, avg=4096.00, stdev= 0.00, samples=1 00:16:47.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:47.159 lat (usec) : 500=22.64%, 750=65.85%, 1000=8.11% 00:16:47.159 lat (msec) : 2=0.19%, 50=3.21% 00:16:47.159 cpu : usr=0.87%, sys=1.26%, ctx=530, majf=0, minf=1 00:16:47.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:47.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.159 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:47.159 job3: (groupid=0, jobs=1): err= 0: pid=330810: Wed May 15 11:02:43 2024 00:16:47.159 read: IOPS=18, BW=73.9KiB/s (75.6kB/s)(76.0KiB/1029msec) 00:16:47.159 slat (nsec): min=9590, max=25831, avg=24759.74, 
stdev=3676.42 00:16:47.159 clat (usec): min=40939, max=42104, avg=41761.89, stdev=421.16 00:16:47.159 lat (usec): min=40965, max=42113, avg=41786.65, stdev=420.54 00:16:47.159 clat percentiles (usec): 00:16:47.159 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:47.159 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:47.159 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:47.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:47.159 | 99.99th=[42206] 00:16:47.159 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:16:47.159 slat (nsec): min=9800, max=50223, avg=28645.92, stdev=9660.77 00:16:47.159 clat (usec): min=144, max=838, avg=422.81, stdev=103.35 00:16:47.159 lat (usec): min=155, max=872, avg=451.46, stdev=107.05 00:16:47.159 clat percentiles (usec): 00:16:47.159 | 1.00th=[ 206], 5.00th=[ 265], 10.00th=[ 285], 20.00th=[ 330], 00:16:47.159 | 30.00th=[ 367], 40.00th=[ 392], 50.00th=[ 420], 60.00th=[ 453], 00:16:47.159 | 70.00th=[ 482], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 594], 00:16:47.159 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 840], 99.95th=[ 840], 00:16:47.159 | 99.99th=[ 840] 00:16:47.159 bw ( KiB/s): min= 4096, max= 4096, per=43.35%, avg=4096.00, stdev= 0.00, samples=1 00:16:47.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:47.159 lat (usec) : 250=2.64%, 500=71.00%, 750=22.41%, 1000=0.38% 00:16:47.159 lat (msec) : 50=3.58% 00:16:47.159 cpu : usr=0.78%, sys=1.36%, ctx=533, majf=0, minf=1 00:16:47.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:47.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.159 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:47.159 00:16:47.159 Run status group 0 (all jobs): 00:16:47.159 READ: bw=2189KiB/s (2242kB/s), 69.5KiB/s-2046KiB/s (71.2kB/s-2095kB/s), io=2268KiB (2322kB), run=1001-1036msec 00:16:47.159 WRITE: bw=9448KiB/s (9675kB/s), 1977KiB/s-3640KiB/s (2024kB/s-3728kB/s), io=9788KiB (10.0MB), run=1001-1036msec 00:16:47.159 00:16:47.159 Disk stats (read/write): 00:16:47.159 nvme0n1: ios=36/512, merge=0/0, ticks=1347/233, in_queue=1580, util=83.77% 00:16:47.159 nvme0n2: ios=561/642, merge=0/0, ticks=821/287, in_queue=1108, util=88.65% 00:16:47.159 nvme0n3: ios=70/512, merge=0/0, ticks=597/286, in_queue=883, util=94.07% 00:16:47.159 nvme0n4: ios=71/512, merge=0/0, ticks=1079/208, in_queue=1287, util=94.11% 00:16:47.159 11:02:43 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:47.159 [global] 00:16:47.159 thread=1 00:16:47.159 invalidate=1 00:16:47.159 rw=randwrite 00:16:47.159 time_based=1 00:16:47.159 runtime=1 00:16:47.159 ioengine=libaio 00:16:47.159 direct=1 00:16:47.159 bs=4096 00:16:47.159 iodepth=1 00:16:47.159 norandommap=0 00:16:47.159 numjobs=1 00:16:47.159 00:16:47.159 verify_dump=1 00:16:47.159 verify_backlog=512 00:16:47.159 verify_state_save=0 00:16:47.159 do_verify=1 00:16:47.159 verify=crc32c-intel 00:16:47.159 [job0] 00:16:47.159 filename=/dev/nvme0n1 00:16:47.159 [job1] 00:16:47.159 filename=/dev/nvme0n2 00:16:47.159 [job2] 00:16:47.159 filename=/dev/nvme0n3 00:16:47.159 [job3] 00:16:47.159 filename=/dev/nvme0n4 00:16:47.159 Could not 
set queue depth (nvme0n1) 00:16:47.159 Could not set queue depth (nvme0n2) 00:16:47.159 Could not set queue depth (nvme0n3) 00:16:47.159 Could not set queue depth (nvme0n4) 00:16:47.425 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.425 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.425 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.425 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.425 fio-3.35 00:16:47.425 Starting 4 threads 00:16:48.844 00:16:48.844 job0: (groupid=0, jobs=1): err= 0: pid=331298: Wed May 15 11:02:45 2024 00:16:48.844 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:48.844 slat (nsec): min=6298, max=57608, avg=24633.41, stdev=3873.35 00:16:48.844 clat (usec): min=684, max=1309, avg=1032.25, stdev=113.61 00:16:48.844 lat (usec): min=709, max=1333, avg=1056.88, stdev=114.27 00:16:48.844 clat percentiles (usec): 00:16:48.844 | 1.00th=[ 734], 5.00th=[ 775], 10.00th=[ 865], 20.00th=[ 963], 00:16:48.844 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:16:48.844 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:16:48.844 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1303], 00:16:48.844 | 99.99th=[ 1303] 00:16:48.845 write: IOPS=752, BW=3009KiB/s (3081kB/s)(3012KiB/1001msec); 0 zone resets 00:16:48.845 slat (nsec): min=7554, max=49217, avg=26276.70, stdev=8920.13 00:16:48.845 clat (usec): min=310, max=1989, avg=570.88, stdev=137.60 00:16:48.845 lat (usec): min=328, max=2035, avg=597.15, stdev=141.45 00:16:48.845 clat percentiles (usec): 00:16:48.845 | 1.00th=[ 322], 5.00th=[ 351], 10.00th=[ 408], 20.00th=[ 453], 00:16:48.845 | 30.00th=[ 486], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 603], 00:16:48.845 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 750], 00:16:48.845 | 99.00th=[ 816], 99.50th=[ 922], 99.90th=[ 1991], 99.95th=[ 1991], 00:16:48.845 | 99.99th=[ 1991] 00:16:48.845 bw ( KiB/s): min= 4096, max= 4096, per=34.25%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.845 lat (usec) : 500=19.45%, 750=38.02%, 1000=13.12% 00:16:48.845 lat (msec) : 2=29.41% 00:16:48.845 cpu : usr=2.70%, sys=4.20%, ctx=1265, majf=0, minf=1 00:16:48.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.845 issued rwts: total=512,753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.845 job1: (groupid=0, jobs=1): err= 0: pid=331305: Wed May 15 11:02:45 2024 00:16:48.845 read: IOPS=671, BW=2685KiB/s (2750kB/s)(2688KiB/1001msec) 00:16:48.845 slat (nsec): min=2876, max=24984, avg=10035.88, stdev=5206.52 00:16:48.845 clat (usec): min=277, max=1012, avg=764.54, stdev=127.00 00:16:48.845 lat (usec): min=286, max=1020, avg=774.58, stdev=126.73 00:16:48.845 clat percentiles (usec): 00:16:48.845 | 1.00th=[ 465], 5.00th=[ 537], 10.00th=[ 578], 20.00th=[ 652], 00:16:48.845 | 30.00th=[ 693], 40.00th=[ 742], 50.00th=[ 783], 60.00th=[ 824], 00:16:48.845 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 
947], 00:16:48.845 | 99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 1012], 99.95th=[ 1012], 00:16:48.845 | 99.99th=[ 1012] 00:16:48.845 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:48.845 slat (nsec): min=2280, max=53566, avg=10995.49, stdev=6938.37 00:16:48.845 clat (usec): min=96, max=874, avg=451.79, stdev=118.25 00:16:48.845 lat (usec): min=100, max=885, avg=462.78, stdev=119.85 00:16:48.845 clat percentiles (usec): 00:16:48.845 | 1.00th=[ 127], 5.00th=[ 247], 10.00th=[ 297], 20.00th=[ 359], 00:16:48.845 | 30.00th=[ 392], 40.00th=[ 429], 50.00th=[ 461], 60.00th=[ 482], 00:16:48.845 | 70.00th=[ 510], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 644], 00:16:48.845 | 99.00th=[ 709], 99.50th=[ 717], 99.90th=[ 791], 99.95th=[ 873], 00:16:48.845 | 99.99th=[ 873] 00:16:48.845 bw ( KiB/s): min= 4096, max= 4096, per=34.25%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.845 lat (usec) : 100=0.12%, 250=3.07%, 500=38.15%, 750=35.08%, 1000=23.53% 00:16:48.845 lat (msec) : 2=0.06% 00:16:48.845 cpu : usr=0.50%, sys=1.90%, ctx=1698, majf=0, minf=1 00:16:48.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.845 issued rwts: total=672,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.845 job2: (groupid=0, jobs=1): err= 0: pid=331319: Wed May 15 11:02:45 2024 00:16:48.845 read: IOPS=123, BW=496KiB/s (507kB/s)(496KiB/1001msec) 00:16:48.845 slat (nsec): min=7468, max=46566, avg=24259.10, stdev=3477.50 00:16:48.845 clat (usec): min=786, max=42649, avg=5470.05, stdev=12581.44 00:16:48.845 lat (usec): min=810, max=42673, avg=5494.31, stdev=12581.29 00:16:48.845 clat percentiles (usec): 00:16:48.845 | 1.00th=[ 840], 5.00th=[ 1037], 10.00th=[ 1057], 20.00th=[ 1106], 00:16:48.845 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1221], 00:16:48.845 | 70.00th=[ 1287], 80.00th=[ 1319], 90.00th=[41157], 95.00th=[42206], 00:16:48.845 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:48.845 | 99.99th=[42730] 00:16:48.845 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:48.845 slat (nsec): min=3429, max=48924, avg=25643.93, stdev=8827.54 00:16:48.845 clat (usec): min=158, max=1052, avg=589.61, stdev=134.61 00:16:48.845 lat (usec): min=163, max=1083, avg=615.25, stdev=138.34 00:16:48.845 clat percentiles (usec): 00:16:48.845 | 1.00th=[ 233], 5.00th=[ 355], 10.00th=[ 404], 20.00th=[ 478], 00:16:48.845 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:16:48.845 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 791], 00:16:48.845 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 1057], 99.95th=[ 1057], 00:16:48.845 | 99.99th=[ 1057] 00:16:48.845 bw ( KiB/s): min= 4096, max= 4096, per=34.25%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.845 lat (usec) : 250=0.94%, 500=18.40%, 750=52.99%, 1000=8.49% 00:16:48.845 lat (msec) : 2=17.14%, 50=2.04% 00:16:48.845 cpu : usr=0.50%, sys=2.00%, ctx=636, majf=0, minf=1 00:16:48.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:48.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.845 issued rwts: total=124,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.845 job3: (groupid=0, jobs=1): err= 0: pid=331326: Wed May 15 11:02:45 2024 00:16:48.845 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:48.845 slat (nsec): min=7518, max=55373, avg=25015.54, stdev=2953.94 00:16:48.845 clat (usec): min=694, max=1194, avg=1022.09, stdev=84.67 00:16:48.845 lat (usec): min=720, max=1219, avg=1047.11, stdev=84.84 00:16:48.845 clat percentiles (usec): 00:16:48.845 | 1.00th=[ 791], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 963], 00:16:48.845 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:16:48.845 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1139], 00:16:48.845 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1188], 99.95th=[ 1188], 00:16:48.845 | 99.99th=[ 1188] 00:16:48.845 write: IOPS=703, BW=2813KiB/s (2881kB/s)(2816KiB/1001msec); 0 zone resets 00:16:48.845 slat (nsec): min=9422, max=68300, avg=28941.29, stdev=8407.47 00:16:48.845 clat (usec): min=200, max=1001, avg=615.82, stdev=129.12 00:16:48.845 lat (usec): min=232, max=1031, avg=644.76, stdev=131.55 00:16:48.845 clat percentiles (usec): 00:16:48.845 | 1.00th=[ 297], 5.00th=[ 400], 10.00th=[ 445], 20.00th=[ 510], 00:16:48.845 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 660], 00:16:48.845 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 807], 00:16:48.845 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 1004], 99.95th=[ 1004], 00:16:48.845 | 99.99th=[ 1004] 00:16:48.845 bw ( KiB/s): min= 4096, max= 4096, per=34.25%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.845 lat (usec) : 250=0.25%, 500=10.12%, 750=40.46%, 1000=20.15% 00:16:48.845 lat (msec) : 2=29.03% 00:16:48.845 cpu : usr=2.20%, sys=3.20%, ctx=1217, majf=0, minf=1 00:16:48.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.845 issued rwts: total=512,704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.845 00:16:48.845 Run status group 0 (all jobs): 00:16:48.845 READ: bw=7273KiB/s (7447kB/s), 496KiB/s-2685KiB/s (507kB/s-2750kB/s), io=7280KiB (7455kB), run=1001-1001msec 00:16:48.845 WRITE: bw=11.7MiB/s (12.2MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=11.7MiB (12.3MB), run=1001-1001msec 00:16:48.845 00:16:48.845 Disk stats (read/write): 00:16:48.845 nvme0n1: ios=545/512, merge=0/0, ticks=540/225, in_queue=765, util=87.88% 00:16:48.845 nvme0n2: ios=562/949, merge=0/0, ticks=583/415, in_queue=998, util=96.73% 00:16:48.845 nvme0n3: ios=79/512, merge=0/0, ticks=500/288, in_queue=788, util=88.48% 00:16:48.845 nvme0n4: ios=518/512, merge=0/0, ticks=641/303, in_queue=944, util=96.26% 00:16:48.845 11:02:45 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:48.845 [global] 00:16:48.845 thread=1 00:16:48.845 invalidate=1 00:16:48.845 rw=write 00:16:48.845 time_based=1 00:16:48.845 runtime=1 00:16:48.845 ioengine=libaio 00:16:48.845 direct=1 00:16:48.845 bs=4096 00:16:48.845 iodepth=128 00:16:48.845 
norandommap=0 00:16:48.845 numjobs=1 00:16:48.845 00:16:48.845 verify_dump=1 00:16:48.845 verify_backlog=512 00:16:48.845 verify_state_save=0 00:16:48.845 do_verify=1 00:16:48.845 verify=crc32c-intel 00:16:48.845 [job0] 00:16:48.845 filename=/dev/nvme0n1 00:16:48.845 [job1] 00:16:48.845 filename=/dev/nvme0n2 00:16:48.845 [job2] 00:16:48.845 filename=/dev/nvme0n3 00:16:48.845 [job3] 00:16:48.845 filename=/dev/nvme0n4 00:16:48.845 Could not set queue depth (nvme0n1) 00:16:48.845 Could not set queue depth (nvme0n2) 00:16:48.845 Could not set queue depth (nvme0n3) 00:16:48.845 Could not set queue depth (nvme0n4) 00:16:49.111 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:49.111 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:49.111 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:49.111 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:49.111 fio-3.35 00:16:49.111 Starting 4 threads 00:16:50.522 00:16:50.522 job0: (groupid=0, jobs=1): err= 0: pid=331813: Wed May 15 11:02:46 2024 00:16:50.522 read: IOPS=9179, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1004msec) 00:16:50.522 slat (nsec): min=838, max=6938.1k, avg=54926.62, stdev=352470.26 00:16:50.522 clat (usec): min=3018, max=26084, avg=7129.62, stdev=1732.82 00:16:50.522 lat (usec): min=3023, max=28425, avg=7184.55, stdev=1758.17 00:16:50.522 clat percentiles (usec): 00:16:50.522 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5735], 20.00th=[ 6259], 00:16:50.522 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:16:50.522 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8455], 95.00th=[ 8979], 00:16:50.522 | 99.00th=[14877], 99.50th=[20317], 99.90th=[24773], 99.95th=[26084], 00:16:50.522 | 99.99th=[26084] 00:16:50.522 write: IOPS=9273, BW=36.2MiB/s (38.0MB/s)(36.4MiB/1004msec); 0 zone resets 00:16:50.522 slat (nsec): min=1462, max=5189.7k, avg=47430.51, stdev=233276.23 00:16:50.522 clat (usec): min=2521, max=27661, avg=6602.50, stdev=1875.75 00:16:50.522 lat (usec): min=2530, max=27663, avg=6649.93, stdev=1890.80 00:16:50.522 clat percentiles (usec): 00:16:50.522 | 1.00th=[ 3392], 5.00th=[ 4015], 10.00th=[ 5014], 20.00th=[ 6063], 00:16:50.522 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6652], 00:16:50.522 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7439], 95.00th=[ 8717], 00:16:50.522 | 99.00th=[14353], 99.50th=[19530], 99.90th=[24249], 99.95th=[24249], 00:16:50.522 | 99.99th=[27657] 00:16:50.522 bw ( KiB/s): min=36864, max=36864, per=38.46%, avg=36864.00, stdev= 0.00, samples=2 00:16:50.522 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:16:50.522 lat (msec) : 4=2.56%, 10=94.94%, 20=2.01%, 50=0.49% 00:16:50.522 cpu : usr=4.79%, sys=5.98%, ctx=1088, majf=0, minf=1 00:16:50.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:50.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.522 issued rwts: total=9216,9311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.522 job1: (groupid=0, jobs=1): err= 0: pid=331821: Wed May 15 11:02:46 2024 00:16:50.522 read: IOPS=2555, BW=9.98MiB/s (10.5MB/s)(10.5MiB/1051msec) 00:16:50.522 slat (nsec): 
min=904, max=17591k, avg=155176.09, stdev=1033172.38 00:16:50.522 clat (msec): min=3, max=141, avg=18.53, stdev=20.21 00:16:50.522 lat (msec): min=3, max=141, avg=18.68, stdev=20.33 00:16:50.522 clat percentiles (msec): 00:16:50.522 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:16:50.522 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 15], 00:16:50.522 | 70.00th=[ 19], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 35], 00:16:50.522 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 142], 00:16:50.522 | 99.99th=[ 142] 00:16:50.522 write: IOPS=2922, BW=11.4MiB/s (12.0MB/s)(12.0MiB/1051msec); 0 zone resets 00:16:50.522 slat (nsec): min=1585, max=16833k, avg=187995.68, stdev=1001916.46 00:16:50.522 clat (msec): min=2, max=141, avg=27.19, stdev=21.10 00:16:50.522 lat (msec): min=2, max=141, avg=27.38, stdev=21.23 00:16:50.522 clat percentiles (msec): 00:16:50.522 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 16], 00:16:50.522 | 30.00th=[ 19], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 20], 00:16:50.522 | 70.00th=[ 26], 80.00th=[ 43], 90.00th=[ 54], 95.00th=[ 83], 00:16:50.522 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 100], 99.95th=[ 100], 00:16:50.522 | 99.99th=[ 142] 00:16:50.522 bw ( KiB/s): min=12272, max=12288, per=12.81%, avg=12280.00, stdev=11.31, samples=2 00:16:50.522 iops : min= 3068, max= 3072, avg=3070.00, stdev= 2.83, samples=2 00:16:50.522 lat (msec) : 4=1.13%, 10=13.77%, 20=57.16%, 50=17.47%, 100=9.26% 00:16:50.522 lat (msec) : 250=1.22% 00:16:50.522 cpu : usr=1.81%, sys=2.48%, ctx=377, majf=0, minf=1 00:16:50.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:50.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.522 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.522 job2: (groupid=0, jobs=1): err= 0: pid=331827: Wed May 15 11:02:46 2024 00:16:50.522 read: IOPS=8551, BW=33.4MiB/s (35.0MB/s)(33.6MiB/1005msec) 00:16:50.522 slat (nsec): min=954, max=6945.8k, avg=60711.82, stdev=435073.45 00:16:50.522 clat (usec): min=1948, max=14207, avg=8024.70, stdev=1877.46 00:16:50.522 lat (usec): min=3318, max=14228, avg=8085.41, stdev=1901.01 00:16:50.522 clat percentiles (usec): 00:16:50.522 | 1.00th=[ 4883], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6587], 00:16:50.522 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7832], 00:16:50.522 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[11994], 00:16:50.522 | 99.00th=[13304], 99.50th=[13566], 99.90th=[13829], 99.95th=[13960], 00:16:50.522 | 99.99th=[14222] 00:16:50.522 write: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec); 0 zone resets 00:16:50.522 slat (nsec): min=1595, max=5971.3k, avg=49993.00, stdev=294314.74 00:16:50.522 clat (usec): min=1135, max=13833, avg=6728.35, stdev=1437.85 00:16:50.522 lat (usec): min=1144, max=13835, avg=6778.35, stdev=1446.21 00:16:50.522 clat percentiles (usec): 00:16:50.522 | 1.00th=[ 2638], 5.00th=[ 3949], 10.00th=[ 4490], 20.00th=[ 5473], 00:16:50.522 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:16:50.522 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7701], 95.00th=[ 7832], 00:16:50.522 | 99.00th=[10159], 99.50th=[11076], 99.90th=[13566], 99.95th=[13829], 00:16:50.522 | 99.99th=[13829] 00:16:50.522 bw ( KiB/s): min=33976, max=35656, per=36.33%, avg=34816.00, stdev=1187.94, 
samples=2 00:16:50.522 iops : min= 8494, max= 8914, avg=8704.00, stdev=296.98, samples=2 00:16:50.522 lat (msec) : 2=0.02%, 4=2.95%, 10=89.14%, 20=7.89% 00:16:50.522 cpu : usr=8.17%, sys=6.57%, ctx=793, majf=0, minf=1 00:16:50.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:50.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.523 issued rwts: total=8594,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.523 job3: (groupid=0, jobs=1): err= 0: pid=331834: Wed May 15 11:02:46 2024 00:16:50.523 read: IOPS=3828, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1010msec) 00:16:50.523 slat (nsec): min=902, max=18061k, avg=119347.75, stdev=905309.52 00:16:50.523 clat (usec): min=2782, max=38130, avg=14230.50, stdev=5372.31 00:16:50.523 lat (usec): min=5581, max=38133, avg=14349.85, stdev=5437.17 00:16:50.523 clat percentiles (usec): 00:16:50.523 | 1.00th=[ 6718], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[11207], 00:16:50.523 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:16:50.523 | 70.00th=[14746], 80.00th=[17957], 90.00th=[20055], 95.00th=[26870], 00:16:50.523 | 99.00th=[34341], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:16:50.523 | 99.99th=[38011] 00:16:50.523 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:16:50.523 slat (nsec): min=1531, max=22445k, avg=127299.01, stdev=796597.17 00:16:50.523 clat (usec): min=1225, max=51484, avg=17823.31, stdev=10978.32 00:16:50.523 lat (usec): min=1265, max=51486, avg=17950.61, stdev=11048.33 00:16:50.523 clat percentiles (usec): 00:16:50.523 | 1.00th=[ 4293], 5.00th=[ 6718], 10.00th=[ 7177], 20.00th=[ 9765], 00:16:50.523 | 30.00th=[10421], 40.00th=[12256], 50.00th=[17171], 60.00th=[18744], 00:16:50.523 | 70.00th=[19530], 80.00th=[19530], 90.00th=[33162], 95.00th=[49021], 00:16:50.523 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:16:50.523 | 99.99th=[51643] 00:16:50.523 bw ( KiB/s): min=14992, max=17776, per=17.09%, avg=16384.00, stdev=1968.59, samples=2 00:16:50.523 iops : min= 3748, max= 4444, avg=4096.00, stdev=492.15, samples=2 00:16:50.523 lat (msec) : 2=0.01%, 4=0.24%, 10=16.40%, 20=68.68%, 50=12.47% 00:16:50.523 lat (msec) : 100=2.20% 00:16:50.523 cpu : usr=2.68%, sys=3.17%, ctx=371, majf=0, minf=2 00:16:50.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:50.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.523 issued rwts: total=3867,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.523 00:16:50.523 Run status group 0 (all jobs): 00:16:50.523 READ: bw=90.5MiB/s (94.9MB/s), 9.98MiB/s-35.9MiB/s (10.5MB/s-37.6MB/s), io=95.2MiB (99.8MB), run=1004-1051msec 00:16:50.523 WRITE: bw=93.6MiB/s (98.1MB/s), 11.4MiB/s-36.2MiB/s (12.0MB/s-38.0MB/s), io=98.4MiB (103MB), run=1004-1051msec 00:16:50.523 00:16:50.523 Disk stats (read/write): 00:16:50.523 nvme0n1: ios=7720/7799, merge=0/0, ticks=34475/30826, in_queue=65301, util=89.68% 00:16:50.523 nvme0n2: ios=2609/2591, merge=0/0, ticks=35812/67899, in_queue=103711, util=88.18% 00:16:50.523 nvme0n3: ios=7200/7216, merge=0/0, ticks=54997/46793, in_queue=101790, util=92.31% 00:16:50.523 nvme0n4: 
ios=3129/3271, merge=0/0, ticks=39884/58716, in_queue=98600, util=94.46% 00:16:50.523 11:02:46 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:50.523 [global] 00:16:50.523 thread=1 00:16:50.523 invalidate=1 00:16:50.523 rw=randwrite 00:16:50.523 time_based=1 00:16:50.523 runtime=1 00:16:50.523 ioengine=libaio 00:16:50.523 direct=1 00:16:50.523 bs=4096 00:16:50.523 iodepth=128 00:16:50.523 norandommap=0 00:16:50.523 numjobs=1 00:16:50.523 00:16:50.523 verify_dump=1 00:16:50.523 verify_backlog=512 00:16:50.523 verify_state_save=0 00:16:50.523 do_verify=1 00:16:50.523 verify=crc32c-intel 00:16:50.523 [job0] 00:16:50.523 filename=/dev/nvme0n1 00:16:50.523 [job1] 00:16:50.523 filename=/dev/nvme0n2 00:16:50.523 [job2] 00:16:50.523 filename=/dev/nvme0n3 00:16:50.523 [job3] 00:16:50.523 filename=/dev/nvme0n4 00:16:50.523 Could not set queue depth (nvme0n1) 00:16:50.523 Could not set queue depth (nvme0n2) 00:16:50.523 Could not set queue depth (nvme0n3) 00:16:50.523 Could not set queue depth (nvme0n4) 00:16:50.792 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.792 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.792 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.792 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.792 fio-3.35 00:16:50.792 Starting 4 threads 00:16:52.208 00:16:52.208 job0: (groupid=0, jobs=1): err= 0: pid=332344: Wed May 15 11:02:48 2024 00:16:52.208 read: IOPS=7095, BW=27.7MiB/s (29.1MB/s)(28.0MiB/1009msec) 00:16:52.208 slat (nsec): min=909, max=15376k, avg=74019.04, stdev=603901.89 00:16:52.208 clat (usec): min=968, max=31838, avg=9778.09, stdev=3046.74 00:16:52.208 lat (usec): min=3302, max=32796, avg=9852.11, stdev=3095.57 00:16:52.208 clat percentiles (usec): 00:16:52.208 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7439], 00:16:52.208 | 30.00th=[ 7832], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10159], 00:16:52.208 | 70.00th=[10421], 80.00th=[11469], 90.00th=[14746], 95.00th=[16450], 00:16:52.208 | 99.00th=[17695], 99.50th=[18744], 99.90th=[22414], 99.95th=[22938], 00:16:52.208 | 99.99th=[31851] 00:16:52.208 write: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1009msec); 0 zone resets 00:16:52.208 slat (nsec): min=1550, max=8236.5k, avg=55609.79, stdev=382737.63 00:16:52.208 clat (usec): min=533, max=25553, avg=8084.34, stdev=2972.74 00:16:52.208 lat (usec): min=565, max=25555, avg=8139.95, stdev=2989.35 00:16:52.208 clat percentiles (usec): 00:16:52.208 | 1.00th=[ 2474], 5.00th=[ 3949], 10.00th=[ 4555], 20.00th=[ 5735], 00:16:52.208 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7832], 60.00th=[ 9241], 00:16:52.208 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[13829], 00:16:52.208 | 99.00th=[15664], 99.50th=[17171], 99.90th=[22414], 99.95th=[22414], 00:16:52.208 | 99.99th=[25560] 00:16:52.208 bw ( KiB/s): min=24576, max=32768, per=26.43%, avg=28672.00, stdev=5792.62, samples=2 00:16:52.208 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:16:52.208 lat (usec) : 750=0.06%, 1000=0.01% 00:16:52.208 lat (msec) : 2=0.34%, 4=2.83%, 10=59.80%, 20=36.62%, 50=0.35% 00:16:52.208 cpu : usr=4.96%, sys=6.94%, ctx=550, majf=0, minf=1 00:16:52.208 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:52.209 issued rwts: total=7159,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:52.209 job1: (groupid=0, jobs=1): err= 0: pid=332352: Wed May 15 11:02:48 2024 00:16:52.209 read: IOPS=8077, BW=31.6MiB/s (33.1MB/s)(31.6MiB/1003msec) 00:16:52.209 slat (nsec): min=907, max=8392.9k, avg=62595.35, stdev=448597.44 00:16:52.209 clat (usec): min=2636, max=30896, avg=8102.48, stdev=2659.96 00:16:52.209 lat (usec): min=2642, max=30905, avg=8165.08, stdev=2690.17 00:16:52.209 clat percentiles (usec): 00:16:52.209 | 1.00th=[ 3752], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 6456], 00:16:52.209 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 8029], 00:16:52.209 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10552], 95.00th=[12256], 00:16:52.209 | 99.00th=[19006], 99.50th=[23462], 99.90th=[28705], 99.95th=[30802], 00:16:52.209 | 99.99th=[30802] 00:16:52.209 write: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec); 0 zone resets 00:16:52.209 slat (nsec): min=1576, max=6577.6k, avg=55193.41, stdev=338382.32 00:16:52.209 clat (usec): min=1979, max=30861, avg=7515.81, stdev=3442.26 00:16:52.209 lat (usec): min=1986, max=30863, avg=7571.00, stdev=3467.65 00:16:52.209 clat percentiles (usec): 00:16:52.209 | 1.00th=[ 2704], 5.00th=[ 3884], 10.00th=[ 4228], 20.00th=[ 5473], 00:16:52.209 | 30.00th=[ 6259], 40.00th=[ 6521], 50.00th=[ 6849], 60.00th=[ 7177], 00:16:52.209 | 70.00th=[ 7373], 80.00th=[ 7767], 90.00th=[12911], 95.00th=[16319], 00:16:52.209 | 99.00th=[19530], 99.50th=[20579], 99.90th=[23462], 99.95th=[23462], 00:16:52.209 | 99.99th=[30802] 00:16:52.209 bw ( KiB/s): min=28672, max=36864, per=30.21%, avg=32768.00, stdev=5792.62, samples=2 00:16:52.209 iops : min= 7168, max= 9216, avg=8192.00, stdev=1448.15, samples=2 00:16:52.209 lat (msec) : 2=0.02%, 4=4.60%, 10=82.77%, 20=11.90%, 50=0.71% 00:16:52.209 cpu : usr=7.19%, sys=6.29%, ctx=700, majf=0, minf=1 00:16:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:52.209 issued rwts: total=8102,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:52.209 job2: (groupid=0, jobs=1): err= 0: pid=332361: Wed May 15 11:02:48 2024 00:16:52.209 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:16:52.209 slat (nsec): min=953, max=10864k, avg=73763.62, stdev=549586.37 00:16:52.209 clat (usec): min=2641, max=30400, avg=9604.79, stdev=2988.60 00:16:52.209 lat (usec): min=2651, max=30409, avg=9678.55, stdev=3031.41 00:16:52.209 clat percentiles (usec): 00:16:52.209 | 1.00th=[ 5211], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 7701], 00:16:52.209 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 9372], 00:16:52.209 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12780], 95.00th=[14877], 00:16:52.209 | 99.00th=[20841], 99.50th=[22152], 99.90th=[30278], 99.95th=[30278], 00:16:52.209 | 99.99th=[30278] 00:16:52.209 write: IOPS=6833, BW=26.7MiB/s (28.0MB/s)(26.9MiB/1007msec); 0 zone resets 00:16:52.209 slat (nsec): min=1590, max=7026.6k, avg=64547.57, stdev=399731.37 00:16:52.209 
clat (usec): min=389, max=45673, avg=9234.41, stdev=5632.18 00:16:52.209 lat (usec): min=418, max=45679, avg=9298.96, stdev=5666.33 00:16:52.209 clat percentiles (usec): 00:16:52.209 | 1.00th=[ 1729], 5.00th=[ 3752], 10.00th=[ 4883], 20.00th=[ 5997], 00:16:52.209 | 30.00th=[ 6849], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 8586], 00:16:52.209 | 70.00th=[10159], 80.00th=[10683], 90.00th=[13173], 95.00th=[17957], 00:16:52.209 | 99.00th=[34866], 99.50th=[40633], 99.90th=[45876], 99.95th=[45876], 00:16:52.209 | 99.99th=[45876] 00:16:52.209 bw ( KiB/s): min=25320, max=28704, per=24.90%, avg=27012.00, stdev=2392.85, samples=2 00:16:52.209 iops : min= 6330, max= 7176, avg=6753.00, stdev=598.21, samples=2 00:16:52.209 lat (usec) : 500=0.03%, 750=0.03%, 1000=0.14% 00:16:52.209 lat (msec) : 2=0.40%, 4=2.41%, 10=64.10%, 20=30.08%, 50=2.81% 00:16:52.209 cpu : usr=4.27%, sys=8.15%, ctx=566, majf=0, minf=1 00:16:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:52.209 issued rwts: total=6656,6881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:52.209 job3: (groupid=0, jobs=1): err= 0: pid=332368: Wed May 15 11:02:48 2024 00:16:52.209 read: IOPS=4989, BW=19.5MiB/s (20.4MB/s)(19.6MiB/1004msec) 00:16:52.209 slat (nsec): min=890, max=16378k, avg=95826.83, stdev=730645.21 00:16:52.209 clat (usec): min=747, max=46506, avg=12669.86, stdev=6742.64 00:16:52.209 lat (usec): min=1024, max=46529, avg=12765.68, stdev=6790.99 00:16:52.209 clat percentiles (usec): 00:16:52.209 | 1.00th=[ 4424], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 8848], 00:16:52.209 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10814], 00:16:52.209 | 70.00th=[11994], 80.00th=[14615], 90.00th=[22676], 95.00th=[31065], 00:16:52.209 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:52.209 | 99.99th=[46400] 00:16:52.209 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:16:52.209 slat (nsec): min=1472, max=12699k, avg=94403.64, stdev=614919.97 00:16:52.209 clat (usec): min=1845, max=52703, avg=12080.29, stdev=8004.36 00:16:52.209 lat (usec): min=1851, max=52707, avg=12174.70, stdev=8051.21 00:16:52.209 clat percentiles (usec): 00:16:52.209 | 1.00th=[ 4359], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8586], 00:16:52.209 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10421], 00:16:52.209 | 70.00th=[10814], 80.00th=[11469], 90.00th=[15008], 95.00th=[31851], 00:16:52.209 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:16:52.209 | 99.99th=[52691] 00:16:52.209 bw ( KiB/s): min=16632, max=24328, per=18.88%, avg=20480.00, stdev=5441.89, samples=2 00:16:52.209 iops : min= 4158, max= 6082, avg=5120.00, stdev=1360.47, samples=2 00:16:52.209 lat (usec) : 750=0.01%, 1000=0.01% 00:16:52.209 lat (msec) : 2=0.39%, 4=0.47%, 10=42.38%, 20=47.42%, 50=8.79% 00:16:52.209 lat (msec) : 100=0.53% 00:16:52.209 cpu : usr=3.79%, sys=4.99%, ctx=377, majf=0, minf=2 00:16:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:52.209 issued rwts: total=5009,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:52.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:52.209 00:16:52.209 Run status group 0 (all jobs): 00:16:52.209 READ: bw=104MiB/s (109MB/s), 19.5MiB/s-31.6MiB/s (20.4MB/s-33.1MB/s), io=105MiB (110MB), run=1003-1009msec 00:16:52.209 WRITE: bw=106MiB/s (111MB/s), 19.9MiB/s-31.9MiB/s (20.9MB/s-33.5MB/s), io=107MiB (112MB), run=1003-1009msec 00:16:52.209 00:16:52.210 Disk stats (read/write): 00:16:52.210 nvme0n1: ios=6197/6255, merge=0/0, ticks=55071/45943, in_queue=101014, util=96.99% 00:16:52.210 nvme0n2: ios=6568/6656, merge=0/0, ticks=51605/49738, in_queue=101343, util=100.00% 00:16:52.210 nvme0n3: ios=5669/5967, merge=0/0, ticks=43701/45843, in_queue=89544, util=98.52% 00:16:52.210 nvme0n4: ios=3763/4096, merge=0/0, ticks=23851/20923, in_queue=44774, util=89.55% 00:16:52.210 11:02:48 -- target/fio.sh@55 -- # sync 00:16:52.210 11:02:48 -- target/fio.sh@59 -- # fio_pid=332662 00:16:52.210 11:02:48 -- target/fio.sh@61 -- # sleep 3 00:16:52.210 11:02:48 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:52.210 [global] 00:16:52.210 thread=1 00:16:52.210 invalidate=1 00:16:52.210 rw=read 00:16:52.210 time_based=1 00:16:52.210 runtime=10 00:16:52.210 ioengine=libaio 00:16:52.210 direct=1 00:16:52.210 bs=4096 00:16:52.210 iodepth=1 00:16:52.210 norandommap=1 00:16:52.210 numjobs=1 00:16:52.210 00:16:52.210 [job0] 00:16:52.210 filename=/dev/nvme0n1 00:16:52.210 [job1] 00:16:52.210 filename=/dev/nvme0n2 00:16:52.210 [job2] 00:16:52.210 filename=/dev/nvme0n3 00:16:52.210 [job3] 00:16:52.210 filename=/dev/nvme0n4 00:16:52.210 Could not set queue depth (nvme0n1) 00:16:52.210 Could not set queue depth (nvme0n2) 00:16:52.210 Could not set queue depth (nvme0n3) 00:16:52.210 Could not set queue depth (nvme0n4) 00:16:52.474 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.474 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.474 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.474 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.474 fio-3.35 00:16:52.474 Starting 4 threads 00:16:55.015 11:02:51 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:55.276 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2596864, buflen=4096 00:16:55.276 fio: pid=332918, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:55.276 11:02:51 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:55.276 11:02:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.276 11:02:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:55.276 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=15962112, buflen=4096 00:16:55.276 fio: pid=332906, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:55.536 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=13504512, buflen=4096 00:16:55.536 fio: pid=332872, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:55.536 11:02:52 -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.536 11:02:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:55.536 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5545984, buflen=4096 00:16:55.536 fio: pid=332877, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:55.536 11:02:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.798 11:02:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:55.798 00:16:55.798 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=332872: Wed May 15 11:02:52 2024 00:16:55.798 read: IOPS=1170, BW=4680KiB/s (4792kB/s)(12.9MiB/2818msec) 00:16:55.798 slat (usec): min=5, max=25703, avg=35.52, stdev=518.70 00:16:55.798 clat (usec): min=204, max=41924, avg=810.26, stdev=1012.10 00:16:55.798 lat (usec): min=210, max=41948, avg=845.79, stdev=1137.74 00:16:55.798 clat percentiles (usec): 00:16:55.798 | 1.00th=[ 383], 5.00th=[ 519], 10.00th=[ 586], 20.00th=[ 660], 00:16:55.798 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 791], 60.00th=[ 824], 00:16:55.798 | 70.00th=[ 865], 80.00th=[ 922], 90.00th=[ 996], 95.00th=[ 1037], 00:16:55.798 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1369], 99.95th=[40633], 00:16:55.798 | 99.99th=[41681] 00:16:55.798 bw ( KiB/s): min= 3696, max= 5312, per=37.95%, avg=4683.20, stdev=749.92, samples=5 00:16:55.798 iops : min= 924, max= 1328, avg=1170.80, stdev=187.48, samples=5 00:16:55.798 lat (usec) : 250=0.09%, 500=4.21%, 750=34.99%, 1000=51.43% 00:16:55.798 lat (msec) : 2=9.19%, 50=0.06% 00:16:55.798 cpu : usr=1.53%, sys=4.12%, ctx=3303, majf=0, minf=1 00:16:55.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.798 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.798 issued rwts: total=3298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.798 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=332877: Wed May 15 11:02:52 2024 00:16:55.798 read: IOPS=455, BW=1820KiB/s (1864kB/s)(5416KiB/2976msec) 00:16:55.798 slat (usec): min=6, max=11265, avg=47.40, stdev=452.56 00:16:55.798 clat (usec): min=292, max=42343, avg=2137.49, stdev=7280.27 00:16:55.798 lat (usec): min=310, max=42367, avg=2184.91, stdev=7290.98 00:16:55.798 clat percentiles (usec): 00:16:55.798 | 1.00th=[ 379], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 701], 00:16:55.798 | 30.00th=[ 750], 40.00th=[ 791], 50.00th=[ 832], 60.00th=[ 873], 00:16:55.798 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 1004], 00:16:55.798 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:55.798 | 99.99th=[42206] 00:16:55.798 bw ( KiB/s): min= 96, max= 4736, per=15.71%, avg=1939.20, stdev=2524.01, samples=5 00:16:55.798 iops : min= 24, max= 1184, avg=484.80, stdev=631.00, samples=5 00:16:55.798 lat (usec) : 500=1.99%, 750=28.19%, 1000=64.43% 00:16:55.798 lat (msec) : 2=2.07%, 50=3.25% 00:16:55.798 cpu : usr=0.40%, sys=1.34%, ctx=1360, majf=0, minf=1 00:16:55.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:55.798 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.798 issued rwts: total=1355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.798 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=332906: Wed May 15 11:02:52 2024 00:16:55.798 read: IOPS=1445, BW=5780KiB/s (5918kB/s)(15.2MiB/2697msec) 00:16:55.798 slat (usec): min=5, max=14797, avg=29.86, stdev=297.49 00:16:55.798 clat (usec): min=190, max=1235, avg=649.83, stdev=112.82 00:16:55.798 lat (usec): min=196, max=15562, avg=679.70, stdev=321.20 00:16:55.798 clat percentiles (usec): 00:16:55.798 | 1.00th=[ 322], 5.00th=[ 449], 10.00th=[ 502], 20.00th=[ 562], 00:16:55.798 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 693], 00:16:55.798 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 807], 00:16:55.798 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 979], 00:16:55.798 | 99.99th=[ 1237] 00:16:55.798 bw ( KiB/s): min= 5672, max= 6264, per=47.80%, avg=5899.20, stdev=218.69, samples=5 00:16:55.798 iops : min= 1418, max= 1566, avg=1474.80, stdev=54.67, samples=5 00:16:55.798 lat (usec) : 250=0.31%, 500=9.44%, 750=71.86%, 1000=18.34% 00:16:55.798 lat (msec) : 2=0.03% 00:16:55.798 cpu : usr=1.97%, sys=5.42%, ctx=3900, majf=0, minf=1 00:16:55.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.798 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.798 issued rwts: total=3898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.798 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=332918: Wed May 15 11:02:52 2024 00:16:55.798 read: IOPS=251, BW=1005KiB/s (1029kB/s)(2536KiB/2524msec) 00:16:55.798 slat (nsec): min=6064, max=53590, avg=23388.12, stdev=6297.61 00:16:55.798 clat (usec): min=336, max=43081, avg=3910.76, stdev=10986.74 00:16:55.798 lat (usec): min=343, max=43111, avg=3934.14, stdev=10987.32 00:16:55.798 clat percentiles (usec): 00:16:55.798 | 1.00th=[ 445], 5.00th=[ 545], 10.00th=[ 578], 20.00th=[ 644], 00:16:55.798 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 791], 00:16:55.798 | 70.00th=[ 824], 80.00th=[ 873], 90.00th=[ 955], 95.00th=[42206], 00:16:55.798 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:16:55.798 | 99.99th=[43254] 00:16:55.798 bw ( KiB/s): min= 96, max= 4128, per=8.22%, avg=1014.40, stdev=1742.81, samples=5 00:16:55.798 iops : min= 24, max= 1032, avg=253.60, stdev=435.70, samples=5 00:16:55.798 lat (usec) : 500=2.99%, 750=46.77%, 1000=41.89% 00:16:55.798 lat (msec) : 2=0.47%, 50=7.72% 00:16:55.798 cpu : usr=0.40%, sys=0.91%, ctx=635, majf=0, minf=2 00:16:55.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.798 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.798 issued rwts: total=635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.798 00:16:55.798 Run status group 0 (all jobs): 00:16:55.798 READ: bw=12.1MiB/s (12.6MB/s), 1005KiB/s-5780KiB/s (1029kB/s-5918kB/s), io=35.9MiB (37.6MB), 
run=2524-2976msec 00:16:55.798 00:16:55.798 Disk stats (read/write): 00:16:55.798 nvme0n1: ios=3277/0, merge=0/0, ticks=2379/0, in_queue=2379, util=91.95% 00:16:55.798 nvme0n2: ios=1219/0, merge=0/0, ticks=2718/0, in_queue=2718, util=94.20% 00:16:55.798 nvme0n3: ios=3749/0, merge=0/0, ticks=2072/0, in_queue=2072, util=95.64% 00:16:55.798 nvme0n4: ios=356/0, merge=0/0, ticks=2254/0, in_queue=2254, util=95.98% 00:16:55.798 11:02:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.798 11:02:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:56.060 11:02:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:56.060 11:02:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:56.060 11:02:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:56.060 11:02:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:56.321 11:02:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:56.321 11:02:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:56.583 11:02:53 -- target/fio.sh@69 -- # fio_status=0 00:16:56.583 11:02:53 -- target/fio.sh@70 -- # wait 332662 00:16:56.583 11:02:53 -- target/fio.sh@70 -- # fio_status=4 00:16:56.583 11:02:53 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.583 11:02:53 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.583 11:02:53 -- common/autotest_common.sh@1215 -- # local i=0 00:16:56.583 11:02:53 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:56.583 11:02:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.583 11:02:53 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:56.583 11:02:53 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.583 11:02:53 -- common/autotest_common.sh@1227 -- # return 0 00:16:56.583 11:02:53 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:56.583 11:02:53 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:56.583 nvmf hotplug test: fio failed as expected 00:16:56.583 11:02:53 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.843 11:02:53 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:56.843 11:02:53 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:56.843 11:02:53 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:56.843 11:02:53 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:56.843 11:02:53 -- target/fio.sh@91 -- # nvmftestfini 00:16:56.843 11:02:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:56.843 11:02:53 -- nvmf/common.sh@117 -- # sync 00:16:56.843 11:02:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.843 11:02:53 -- nvmf/common.sh@120 -- # set +e 00:16:56.843 11:02:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.843 11:02:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.843 rmmod nvme_tcp 00:16:56.843 
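The trace above has just settled the hotplug verdict: a 10-second fio read job was left running against the exported namespaces while the RAID and malloc bdevs were deleted underneath it, every job died with a Remote I/O error, and the non-zero fio status (fio_status=4) is exactly what the test accepts before tearing the host side down with nvme disconnect and nvmftestfini (the module unloads continue below). A condensed sketch of that pattern follows; it assumes $SPDK_DIR points at the checkout and reuses the rpc.py, fio-wrapper and bdev names seen earlier, and it is not the literal target/fio.sh source.

    # Start a 10-second read workload against the exported namespaces.
    "$SPDK_DIR/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # Pull the backing devices out from under the live I/O.
    rpc="$SPDK_DIR/scripts/rpc.py"
    "$rpc" bdev_raid_delete concat0
    "$rpc" bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"
    done

    # fio is expected to fail with Remote I/O errors once the namespaces vanish;
    # a zero exit status here would mean the deleted bdevs were still serving I/O.
    if wait "$fio_pid"; then
        echo "hotplug test failed: fio completed without errors"
        exit 1
    fi
    echo "nvmf hotplug test: fio failed as expected"

Treating a clean fio exit as the failure case is deliberate: the check passes only when removing the bdevs actually stops I/O on the connected namespaces.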
rmmod nvme_fabrics 00:16:56.843 rmmod nvme_keyring 00:16:56.843 11:02:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.843 11:02:53 -- nvmf/common.sh@124 -- # set -e 00:16:56.843 11:02:53 -- nvmf/common.sh@125 -- # return 0 00:16:56.843 11:02:53 -- nvmf/common.sh@478 -- # '[' -n 329160 ']' 00:16:56.843 11:02:53 -- nvmf/common.sh@479 -- # killprocess 329160 00:16:56.843 11:02:53 -- common/autotest_common.sh@946 -- # '[' -z 329160 ']' 00:16:56.843 11:02:53 -- common/autotest_common.sh@950 -- # kill -0 329160 00:16:56.843 11:02:53 -- common/autotest_common.sh@951 -- # uname 00:16:56.843 11:02:53 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:56.843 11:02:53 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 329160 00:16:56.843 11:02:53 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:56.843 11:02:53 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:56.843 11:02:53 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 329160' 00:16:56.843 killing process with pid 329160 00:16:56.843 11:02:53 -- common/autotest_common.sh@965 -- # kill 329160 00:16:56.843 [2024-05-15 11:02:53.456507] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:56.843 11:02:53 -- common/autotest_common.sh@970 -- # wait 329160 00:16:57.105 11:02:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:57.105 11:02:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:57.105 11:02:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:57.105 11:02:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.105 11:02:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:57.105 11:02:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.105 11:02:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.105 11:02:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.019 11:02:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:59.019 00:16:59.019 real 0m28.192s 00:16:59.019 user 2m42.624s 00:16:59.019 sys 0m8.851s 00:16:59.019 11:02:55 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:59.019 11:02:55 -- common/autotest_common.sh@10 -- # set +x 00:16:59.019 ************************************ 00:16:59.019 END TEST nvmf_fio_target 00:16:59.019 ************************************ 00:16:59.282 11:02:55 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:59.282 11:02:55 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:59.282 11:02:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:59.282 11:02:55 -- common/autotest_common.sh@10 -- # set +x 00:16:59.282 ************************************ 00:16:59.282 START TEST nvmf_bdevio 00:16:59.282 ************************************ 00:16:59.282 11:02:55 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:59.282 * Looking for test storage... 
00:16:59.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.282 11:02:55 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.282 11:02:55 -- nvmf/common.sh@7 -- # uname -s 00:16:59.282 11:02:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.282 11:02:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.282 11:02:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.282 11:02:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.282 11:02:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.282 11:02:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.282 11:02:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.282 11:02:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.282 11:02:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.282 11:02:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.282 11:02:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.282 11:02:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.282 11:02:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.282 11:02:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.282 11:02:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.282 11:02:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.282 11:02:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.282 11:02:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.282 11:02:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.282 11:02:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.282 11:02:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.282 11:02:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.282 11:02:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.282 11:02:55 -- paths/export.sh@5 -- # export PATH 00:16:59.282 11:02:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.282 11:02:55 -- nvmf/common.sh@47 -- # : 0 00:16:59.282 11:02:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.282 11:02:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.282 11:02:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.282 11:02:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.282 11:02:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.282 11:02:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.282 11:02:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.282 11:02:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.282 11:02:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.282 11:02:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.282 11:02:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:59.282 11:02:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:59.282 11:02:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.282 11:02:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:59.282 11:02:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:59.282 11:02:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:59.282 11:02:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.282 11:02:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.282 11:02:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.282 11:02:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:59.282 11:02:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:59.282 11:02:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.282 11:02:55 -- common/autotest_common.sh@10 -- # set +x 00:17:07.434 11:03:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:07.434 11:03:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:07.434 11:03:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:07.434 11:03:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:07.434 11:03:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:07.434 11:03:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:07.434 11:03:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:07.434 11:03:02 -- nvmf/common.sh@295 -- # net_devs=() 00:17:07.434 11:03:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:07.434 11:03:02 -- nvmf/common.sh@296 
-- # e810=() 00:17:07.434 11:03:02 -- nvmf/common.sh@296 -- # local -ga e810 00:17:07.434 11:03:02 -- nvmf/common.sh@297 -- # x722=() 00:17:07.434 11:03:02 -- nvmf/common.sh@297 -- # local -ga x722 00:17:07.434 11:03:02 -- nvmf/common.sh@298 -- # mlx=() 00:17:07.434 11:03:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:07.434 11:03:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.434 11:03:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:07.434 11:03:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:07.434 11:03:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:07.434 11:03:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.434 11:03:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:07.434 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:07.434 11:03:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.434 11:03:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:07.434 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:07.434 11:03:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:07.434 11:03:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.434 11:03:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.434 11:03:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:07.434 11:03:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.434 11:03:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:07.434 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:17:07.434 11:03:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.434 11:03:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.434 11:03:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.434 11:03:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:07.434 11:03:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.434 11:03:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:07.434 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:07.434 11:03:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.434 11:03:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:07.434 11:03:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:07.434 11:03:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:07.434 11:03:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:07.434 11:03:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.434 11:03:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.434 11:03:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.434 11:03:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:07.434 11:03:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.434 11:03:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.434 11:03:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:07.434 11:03:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.434 11:03:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.434 11:03:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:07.434 11:03:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:07.434 11:03:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.434 11:03:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:07.434 11:03:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:07.434 11:03:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:07.434 11:03:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:07.434 11:03:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:07.434 11:03:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:07.434 11:03:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:07.434 11:03:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:07.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:17:07.434 00:17:07.434 --- 10.0.0.2 ping statistics --- 00:17:07.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.434 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:17:07.434 11:03:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:07.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:07.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:17:07.434 00:17:07.434 --- 10.0.0.1 ping statistics --- 00:17:07.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.434 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:17:07.434 11:03:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.435 11:03:02 -- nvmf/common.sh@411 -- # return 0 00:17:07.435 11:03:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:07.435 11:03:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.435 11:03:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:07.435 11:03:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:07.435 11:03:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.435 11:03:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:07.435 11:03:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:07.435 11:03:02 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:07.435 11:03:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:07.435 11:03:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:07.435 11:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:07.435 11:03:02 -- nvmf/common.sh@470 -- # nvmfpid=338053 00:17:07.435 11:03:02 -- nvmf/common.sh@471 -- # waitforlisten 338053 00:17:07.435 11:03:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:07.435 11:03:02 -- common/autotest_common.sh@827 -- # '[' -z 338053 ']' 00:17:07.435 11:03:02 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.435 11:03:02 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:07.435 11:03:02 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.435 11:03:02 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:07.435 11:03:02 -- common/autotest_common.sh@10 -- # set +x 00:17:07.435 [2024-05-15 11:03:03.016958] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:07.435 [2024-05-15 11:03:03.017028] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.435 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.435 [2024-05-15 11:03:03.104117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.435 [2024-05-15 11:03:03.199024] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.435 [2024-05-15 11:03:03.199080] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.435 [2024-05-15 11:03:03.199088] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.435 [2024-05-15 11:03:03.199095] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.435 [2024-05-15 11:03:03.199101] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:07.435 [2024-05-15 11:03:03.199191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:07.435 [2024-05-15 11:03:03.199352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:07.435 [2024-05-15 11:03:03.199511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.435 [2024-05-15 11:03:03.199512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:07.435 11:03:03 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:07.435 11:03:03 -- common/autotest_common.sh@860 -- # return 0 00:17:07.435 11:03:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:07.435 11:03:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.435 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:17:07.435 11:03:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.435 11:03:03 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.435 11:03:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.435 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:17:07.435 [2024-05-15 11:03:03.869681] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.435 11:03:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.435 11:03:03 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.435 11:03:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.435 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:17:07.435 Malloc0 00:17:07.435 11:03:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.435 11:03:03 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.435 11:03:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.435 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:17:07.435 11:03:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.435 11:03:03 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.435 11:03:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.435 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:17:07.435 11:03:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.435 11:03:03 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.435 11:03:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.435 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:17:07.435 [2024-05-15 11:03:03.918641] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:07.435 [2024-05-15 11:03:03.918950] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.435 11:03:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.435 11:03:03 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:07.435 11:03:03 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:07.435 11:03:03 -- nvmf/common.sh@521 -- # config=() 00:17:07.435 11:03:03 -- nvmf/common.sh@521 -- # local subsystem config 00:17:07.435 11:03:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.435 11:03:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:17:07.435 { 00:17:07.435 "params": { 00:17:07.435 "name": "Nvme$subsystem", 00:17:07.435 "trtype": "$TEST_TRANSPORT", 00:17:07.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.435 "adrfam": "ipv4", 00:17:07.435 "trsvcid": "$NVMF_PORT", 00:17:07.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.435 "hdgst": ${hdgst:-false}, 00:17:07.435 "ddgst": ${ddgst:-false} 00:17:07.435 }, 00:17:07.435 "method": "bdev_nvme_attach_controller" 00:17:07.435 } 00:17:07.435 EOF 00:17:07.435 )") 00:17:07.435 11:03:03 -- nvmf/common.sh@543 -- # cat 00:17:07.435 11:03:03 -- nvmf/common.sh@545 -- # jq . 00:17:07.435 11:03:03 -- nvmf/common.sh@546 -- # IFS=, 00:17:07.435 11:03:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:07.435 "params": { 00:17:07.435 "name": "Nvme1", 00:17:07.435 "trtype": "tcp", 00:17:07.435 "traddr": "10.0.0.2", 00:17:07.435 "adrfam": "ipv4", 00:17:07.435 "trsvcid": "4420", 00:17:07.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.435 "hdgst": false, 00:17:07.435 "ddgst": false 00:17:07.435 }, 00:17:07.435 "method": "bdev_nvme_attach_controller" 00:17:07.435 }' 00:17:07.435 [2024-05-15 11:03:03.975152] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:07.435 [2024-05-15 11:03:03.975247] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338324 ] 00:17:07.435 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.435 [2024-05-15 11:03:04.045231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.697 [2024-05-15 11:03:04.120313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.697 [2024-05-15 11:03:04.120431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.697 [2024-05-15 11:03:04.120434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.697 I/O targets: 00:17:07.697 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:07.697 00:17:07.697 00:17:07.697 CUnit - A unit testing framework for C - Version 2.1-3 00:17:07.697 http://cunit.sourceforge.net/ 00:17:07.697 00:17:07.697 00:17:07.697 Suite: bdevio tests on: Nvme1n1 00:17:07.958 Test: blockdev write read block ...passed 00:17:07.958 Test: blockdev write zeroes read block ...passed 00:17:07.958 Test: blockdev write zeroes read no split ...passed 00:17:07.958 Test: blockdev write zeroes read split ...passed 00:17:07.958 Test: blockdev write zeroes read split partial ...passed 00:17:07.958 Test: blockdev reset ...[2024-05-15 11:03:04.435344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:07.958 [2024-05-15 11:03:04.435409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53980 (9): Bad file descriptor 00:17:07.958 [2024-05-15 11:03:04.490994] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:07.958 passed 00:17:07.958 Test: blockdev write read 8 blocks ...passed 00:17:07.958 Test: blockdev write read size > 128k ...passed 00:17:07.958 Test: blockdev write read invalid size ...passed 00:17:07.958 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:07.958 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:07.958 Test: blockdev write read max offset ...passed 00:17:08.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:08.219 Test: blockdev writev readv 8 blocks ...passed 00:17:08.219 Test: blockdev writev readv 30 x 1block ...passed 00:17:08.219 Test: blockdev writev readv block ...passed 00:17:08.219 Test: blockdev writev readv size > 128k ...passed 00:17:08.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:08.219 Test: blockdev comparev and writev ...[2024-05-15 11:03:04.754873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.219 [2024-05-15 11:03:04.754899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.219 [2024-05-15 11:03:04.754910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.219 [2024-05-15 11:03:04.754916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:08.219 [2024-05-15 11:03:04.755408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.220 [2024-05-15 11:03:04.755416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:08.220 [2024-05-15 11:03:04.755425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.220 [2024-05-15 11:03:04.755430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:08.220 [2024-05-15 11:03:04.755915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.220 [2024-05-15 11:03:04.755923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:08.220 [2024-05-15 11:03:04.755933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.220 [2024-05-15 11:03:04.755938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:08.220 [2024-05-15 11:03:04.756447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.220 [2024-05-15 11:03:04.756454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:08.220 [2024-05-15 11:03:04.756463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.220 [2024-05-15 11:03:04.756468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:08.220 passed 00:17:08.220 Test: blockdev nvme passthru rw ...passed 00:17:08.220 Test: blockdev nvme passthru vendor specific ...[2024-05-15 11:03:04.841171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.220 [2024-05-15 11:03:04.841181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:08.220 [2024-05-15 11:03:04.841495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.220 [2024-05-15 11:03:04.841502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:08.220 [2024-05-15 11:03:04.841740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.220 [2024-05-15 11:03:04.841750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:08.220 [2024-05-15 11:03:04.842114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.220 [2024-05-15 11:03:04.842121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:08.220 passed 00:17:08.220 Test: blockdev nvme admin passthru ...passed 00:17:08.481 Test: blockdev copy ...passed 00:17:08.481 00:17:08.481 Run Summary: Type Total Ran Passed Failed Inactive 00:17:08.481 suites 1 1 n/a 0 0 00:17:08.481 tests 23 23 23 0 0 00:17:08.481 asserts 152 152 152 0 n/a 00:17:08.481 00:17:08.481 Elapsed time = 1.206 seconds 00:17:08.481 11:03:05 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.481 11:03:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.482 11:03:05 -- common/autotest_common.sh@10 -- # set +x 00:17:08.482 11:03:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.482 11:03:05 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:08.482 11:03:05 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:08.482 11:03:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:08.482 11:03:05 -- nvmf/common.sh@117 -- # sync 00:17:08.482 11:03:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.482 11:03:05 -- nvmf/common.sh@120 -- # set +e 00:17:08.482 11:03:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.482 11:03:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.482 rmmod nvme_tcp 00:17:08.482 rmmod nvme_fabrics 00:17:08.482 rmmod nvme_keyring 00:17:08.482 11:03:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.482 11:03:05 -- nvmf/common.sh@124 -- # set -e 00:17:08.482 11:03:05 -- nvmf/common.sh@125 -- # return 0 00:17:08.482 11:03:05 -- nvmf/common.sh@478 -- # '[' -n 338053 ']' 00:17:08.482 11:03:05 -- nvmf/common.sh@479 -- # killprocess 338053 00:17:08.482 11:03:05 -- common/autotest_common.sh@946 -- # '[' -z 338053 ']' 00:17:08.482 11:03:05 -- common/autotest_common.sh@950 -- # kill -0 338053 00:17:08.482 11:03:05 -- common/autotest_common.sh@951 -- # uname 00:17:08.482 11:03:05 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:08.482 11:03:05 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 338053 00:17:08.743 11:03:05 -- 
common/autotest_common.sh@952 -- # process_name=reactor_3 00:17:08.743 11:03:05 -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:17:08.743 11:03:05 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 338053' 00:17:08.743 killing process with pid 338053 00:17:08.743 11:03:05 -- common/autotest_common.sh@965 -- # kill 338053 00:17:08.743 [2024-05-15 11:03:05.167449] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:08.743 11:03:05 -- common/autotest_common.sh@970 -- # wait 338053 00:17:08.743 11:03:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:08.743 11:03:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:08.743 11:03:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:08.743 11:03:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.743 11:03:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.743 11:03:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.743 11:03:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.743 11:03:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.294 11:03:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:11.294 00:17:11.294 real 0m11.662s 00:17:11.294 user 0m12.687s 00:17:11.294 sys 0m5.721s 00:17:11.294 11:03:07 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:11.294 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:17:11.294 ************************************ 00:17:11.294 END TEST nvmf_bdevio 00:17:11.294 ************************************ 00:17:11.294 11:03:07 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:11.294 11:03:07 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:11.294 11:03:07 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:11.294 11:03:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:11.294 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:17:11.294 ************************************ 00:17:11.294 START TEST nvmf_bdevio_no_huge 00:17:11.294 ************************************ 00:17:11.294 11:03:07 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:11.294 * Looking for test storage... 
00:17:11.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.294 11:03:07 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.294 11:03:07 -- nvmf/common.sh@7 -- # uname -s 00:17:11.294 11:03:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.294 11:03:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.294 11:03:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.294 11:03:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.294 11:03:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.294 11:03:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.294 11:03:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.294 11:03:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.294 11:03:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.294 11:03:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.294 11:03:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.294 11:03:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.294 11:03:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.294 11:03:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.294 11:03:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.294 11:03:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.294 11:03:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.294 11:03:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.294 11:03:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.294 11:03:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.295 11:03:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.295 11:03:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.295 11:03:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.295 11:03:07 -- paths/export.sh@5 -- # export PATH 00:17:11.295 11:03:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.295 11:03:07 -- nvmf/common.sh@47 -- # : 0 00:17:11.295 11:03:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.295 11:03:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.295 11:03:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.295 11:03:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.295 11:03:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.295 11:03:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.295 11:03:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.295 11:03:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.295 11:03:07 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.295 11:03:07 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.295 11:03:07 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:11.295 11:03:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:11.295 11:03:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.295 11:03:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:11.295 11:03:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:11.295 11:03:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:11.295 11:03:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.295 11:03:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.295 11:03:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.295 11:03:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:11.295 11:03:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:11.295 11:03:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.295 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:17:17.887 11:03:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:17.887 11:03:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.887 11:03:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.887 11:03:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.887 11:03:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.887 11:03:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.887 11:03:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.887 11:03:14 -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.887 11:03:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.887 11:03:14 -- nvmf/common.sh@296 
-- # e810=() 00:17:17.887 11:03:14 -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.887 11:03:14 -- nvmf/common.sh@297 -- # x722=() 00:17:17.887 11:03:14 -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.887 11:03:14 -- nvmf/common.sh@298 -- # mlx=() 00:17:17.887 11:03:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.887 11:03:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.887 11:03:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.887 11:03:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.887 11:03:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.887 11:03:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.887 11:03:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:17.887 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:17.887 11:03:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.887 11:03:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:17.887 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:17.887 11:03:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.887 11:03:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.887 11:03:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.887 11:03:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.887 11:03:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:17.887 11:03:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.887 11:03:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:17.887 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:17:17.887 11:03:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.887 11:03:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.887 11:03:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.888 11:03:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:17.888 11:03:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.888 11:03:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:17.888 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:17.888 11:03:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.888 11:03:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:17.888 11:03:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:17.888 11:03:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:17.888 11:03:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:17.888 11:03:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:17.888 11:03:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.888 11:03:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.888 11:03:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.888 11:03:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.888 11:03:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.888 11:03:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.888 11:03:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.888 11:03:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.888 11:03:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.888 11:03:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.888 11:03:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.888 11:03:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.888 11:03:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.888 11:03:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.888 11:03:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.888 11:03:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.888 11:03:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.888 11:03:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.888 11:03:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.888 11:03:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:17:17.888 00:17:17.888 --- 10.0.0.2 ping statistics --- 00:17:17.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.888 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:17:17.888 11:03:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:17.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:17:17.888 00:17:17.888 --- 10.0.0.1 ping statistics --- 00:17:17.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.888 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:17.888 11:03:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.888 11:03:14 -- nvmf/common.sh@411 -- # return 0 00:17:17.888 11:03:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:17.888 11:03:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.888 11:03:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:17.888 11:03:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:17.888 11:03:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.888 11:03:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:17.888 11:03:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:17.888 11:03:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:17.888 11:03:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:17.888 11:03:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:17.888 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:17.888 11:03:14 -- nvmf/common.sh@470 -- # nvmfpid=343118 00:17:17.888 11:03:14 -- nvmf/common.sh@471 -- # waitforlisten 343118 00:17:17.888 11:03:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:17.888 11:03:14 -- common/autotest_common.sh@827 -- # '[' -z 343118 ']' 00:17:17.888 11:03:14 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.888 11:03:14 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:17.888 11:03:14 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.888 11:03:14 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:17.888 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:17.888 [2024-05-15 11:03:14.524502] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:17.888 [2024-05-15 11:03:14.524563] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:18.149 [2024-05-15 11:03:14.613510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.149 [2024-05-15 11:03:14.715649] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.149 [2024-05-15 11:03:14.715703] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.149 [2024-05-15 11:03:14.715711] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.149 [2024-05-15 11:03:14.715718] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.149 [2024-05-15 11:03:14.715724] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.149 [2024-05-15 11:03:14.715886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:18.149 [2024-05-15 11:03:14.716044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:18.150 [2024-05-15 11:03:14.716200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.150 [2024-05-15 11:03:14.716202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:18.722 11:03:15 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:18.722 11:03:15 -- common/autotest_common.sh@860 -- # return 0 00:17:18.722 11:03:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:18.722 11:03:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.722 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:18.722 11:03:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.722 11:03:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.722 11:03:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.722 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:18.722 [2024-05-15 11:03:15.361559] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.722 11:03:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.722 11:03:15 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.722 11:03:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.722 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:18.983 Malloc0 00:17:18.983 11:03:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.983 11:03:15 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.983 11:03:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.983 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:18.983 11:03:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.983 11:03:15 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.983 11:03:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.983 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:18.983 11:03:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.983 11:03:15 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.983 11:03:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.983 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:18.983 [2024-05-15 11:03:15.402749] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:18.983 [2024-05-15 11:03:15.403082] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.983 11:03:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.983 11:03:15 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:18.983 11:03:15 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:18.983 11:03:15 -- nvmf/common.sh@521 -- # config=() 00:17:18.983 11:03:15 -- nvmf/common.sh@521 -- # local subsystem config 00:17:18.983 11:03:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.983 11:03:15 -- nvmf/common.sh@543 -- # config+=("$(cat 
<<-EOF 00:17:18.983 { 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme$subsystem", 00:17:18.983 "trtype": "$TEST_TRANSPORT", 00:17:18.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "$NVMF_PORT", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 11:03:15 -- nvmf/common.sh@543 -- # cat 00:17:18.983 11:03:15 -- nvmf/common.sh@545 -- # jq . 00:17:18.983 11:03:15 -- nvmf/common.sh@546 -- # IFS=, 00:17:18.983 11:03:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme1", 00:17:18.983 "trtype": "tcp", 00:17:18.983 "traddr": "10.0.0.2", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.983 "hdgst": false, 00:17:18.983 "ddgst": false 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 }' 00:17:18.983 [2024-05-15 11:03:15.457059] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:18.983 [2024-05-15 11:03:15.457125] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid343325 ] 00:17:18.983 [2024-05-15 11:03:15.525136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.983 [2024-05-15 11:03:15.621189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.983 [2024-05-15 11:03:15.621306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.983 [2024-05-15 11:03:15.621309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.244 I/O targets: 00:17:19.244 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:19.244 00:17:19.244 00:17:19.244 CUnit - A unit testing framework for C - Version 2.1-3 00:17:19.244 http://cunit.sourceforge.net/ 00:17:19.244 00:17:19.244 00:17:19.244 Suite: bdevio tests on: Nvme1n1 00:17:19.244 Test: blockdev write read block ...passed 00:17:19.505 Test: blockdev write zeroes read block ...passed 00:17:19.505 Test: blockdev write zeroes read no split ...passed 00:17:19.505 Test: blockdev write zeroes read split ...passed 00:17:19.505 Test: blockdev write zeroes read split partial ...passed 00:17:19.505 Test: blockdev reset ...[2024-05-15 11:03:15.947153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:19.505 [2024-05-15 11:03:15.947208] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1625580 (9): Bad file descriptor 00:17:19.505 [2024-05-15 11:03:16.054498] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:19.505 passed 00:17:19.505 Test: blockdev write read 8 blocks ...passed 00:17:19.505 Test: blockdev write read size > 128k ...passed 00:17:19.505 Test: blockdev write read invalid size ...passed 00:17:19.505 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:19.505 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:19.505 Test: blockdev write read max offset ...passed 00:17:19.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:19.767 Test: blockdev writev readv 8 blocks ...passed 00:17:19.767 Test: blockdev writev readv 30 x 1block ...passed 00:17:19.767 Test: blockdev writev readv block ...passed 00:17:19.767 Test: blockdev writev readv size > 128k ...passed 00:17:19.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:19.767 Test: blockdev comparev and writev ...[2024-05-15 11:03:16.322665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.767 [2024-05-15 11:03:16.322689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.322700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.767 [2024-05-15 11:03:16.322706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.323163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.767 [2024-05-15 11:03:16.323172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.323181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.767 [2024-05-15 11:03:16.323187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.323690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.767 [2024-05-15 11:03:16.323698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.323707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.767 [2024-05-15 11:03:16.323712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.324131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.767 [2024-05-15 11:03:16.324140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.324149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.767 [2024-05-15 11:03:16.324154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:19.767 passed 00:17:19.767 Test: blockdev nvme passthru rw ...passed 00:17:19.767 Test: blockdev nvme passthru vendor specific ...[2024-05-15 11:03:16.408449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.767 [2024-05-15 11:03:16.408461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.408798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.767 [2024-05-15 11:03:16.408807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.409132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.767 [2024-05-15 11:03:16.409140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:19.767 [2024-05-15 11:03:16.409455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.767 [2024-05-15 11:03:16.409464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:19.767 passed 00:17:20.028 Test: blockdev nvme admin passthru ...passed 00:17:20.028 Test: blockdev copy ...passed 00:17:20.028 00:17:20.028 Run Summary: Type Total Ran Passed Failed Inactive 00:17:20.028 suites 1 1 n/a 0 0 00:17:20.028 tests 23 23 23 0 0 00:17:20.028 asserts 152 152 152 0 n/a 00:17:20.028 00:17:20.028 Elapsed time = 1.345 seconds 00:17:20.289 11:03:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.289 11:03:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.289 11:03:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.289 11:03:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.289 11:03:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:20.289 11:03:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:20.289 11:03:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:20.289 11:03:16 -- nvmf/common.sh@117 -- # sync 00:17:20.289 11:03:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.289 11:03:16 -- nvmf/common.sh@120 -- # set +e 00:17:20.289 11:03:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.289 11:03:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.289 rmmod nvme_tcp 00:17:20.289 rmmod nvme_fabrics 00:17:20.289 rmmod nvme_keyring 00:17:20.289 11:03:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.289 11:03:16 -- nvmf/common.sh@124 -- # set -e 00:17:20.289 11:03:16 -- nvmf/common.sh@125 -- # return 0 00:17:20.289 11:03:16 -- nvmf/common.sh@478 -- # '[' -n 343118 ']' 00:17:20.289 11:03:16 -- nvmf/common.sh@479 -- # killprocess 343118 00:17:20.289 11:03:16 -- common/autotest_common.sh@946 -- # '[' -z 343118 ']' 00:17:20.289 11:03:16 -- common/autotest_common.sh@950 -- # kill -0 343118 00:17:20.289 11:03:16 -- common/autotest_common.sh@951 -- # uname 00:17:20.289 11:03:16 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:20.289 11:03:16 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 343118 00:17:20.289 11:03:16 -- 
common/autotest_common.sh@952 -- # process_name=reactor_3 00:17:20.289 11:03:16 -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:17:20.289 11:03:16 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 343118' 00:17:20.289 killing process with pid 343118 00:17:20.289 11:03:16 -- common/autotest_common.sh@965 -- # kill 343118 00:17:20.289 [2024-05-15 11:03:16.852878] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:20.289 11:03:16 -- common/autotest_common.sh@970 -- # wait 343118 00:17:20.550 11:03:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:20.550 11:03:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:20.550 11:03:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:20.550 11:03:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.550 11:03:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.550 11:03:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.550 11:03:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.550 11:03:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.096 11:03:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:23.096 00:17:23.096 real 0m11.692s 00:17:23.096 user 0m13.588s 00:17:23.096 sys 0m6.038s 00:17:23.096 11:03:19 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:23.096 11:03:19 -- common/autotest_common.sh@10 -- # set +x 00:17:23.096 ************************************ 00:17:23.096 END TEST nvmf_bdevio_no_huge 00:17:23.096 ************************************ 00:17:23.096 11:03:19 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:23.096 11:03:19 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:23.096 11:03:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:23.096 11:03:19 -- common/autotest_common.sh@10 -- # set +x 00:17:23.096 ************************************ 00:17:23.096 START TEST nvmf_tls 00:17:23.096 ************************************ 00:17:23.096 11:03:19 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:23.096 * Looking for test storage... 
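The nvmf_bdevio_no_huge teardown logged a few entries up always runs the same cleanup: unload the kernel NVMe/TCP modules, kill the nvmf target process, and flush the initiator-side test address. A condensed sketch of that sequence, reusing the interface and namespace names from this job; the netns removal line is an assumption about what _remove_spdk_ns does, not a quote from the trace:

  modprobe -v -r nvme-tcp                       # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines seen above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"            # killprocess <pid>: stop the nvmf_tgt started for this test
  ip netns del cvl_0_0_ns_spdk 2> /dev/null     # assumed: _remove_spdk_ns drops the target namespace
  ip -4 addr flush cvl_0_1                      # flush the initiator-side address, as logged above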
00:17:23.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.096 11:03:19 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.096 11:03:19 -- nvmf/common.sh@7 -- # uname -s 00:17:23.096 11:03:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.096 11:03:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.096 11:03:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.096 11:03:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.096 11:03:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.096 11:03:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.096 11:03:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.096 11:03:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.096 11:03:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.096 11:03:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.096 11:03:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.096 11:03:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.096 11:03:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.096 11:03:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.096 11:03:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.096 11:03:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.096 11:03:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.096 11:03:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.096 11:03:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.096 11:03:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.096 11:03:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.096 11:03:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.096 11:03:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.096 11:03:19 -- paths/export.sh@5 -- # export PATH 00:17:23.096 11:03:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.096 11:03:19 -- nvmf/common.sh@47 -- # : 0 00:17:23.096 11:03:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.096 11:03:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.096 11:03:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.096 11:03:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.096 11:03:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.096 11:03:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.096 11:03:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.096 11:03:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.097 11:03:19 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.097 11:03:19 -- target/tls.sh@62 -- # nvmftestinit 00:17:23.097 11:03:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:23.097 11:03:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.097 11:03:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:23.097 11:03:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:23.097 11:03:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:23.097 11:03:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.097 11:03:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.097 11:03:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.097 11:03:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:23.097 11:03:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:23.097 11:03:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.097 11:03:19 -- common/autotest_common.sh@10 -- # set +x 00:17:29.687 11:03:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:29.687 11:03:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.687 11:03:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.687 11:03:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.687 11:03:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.687 11:03:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.687 11:03:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.687 11:03:26 -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.687 11:03:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.687 11:03:26 -- nvmf/common.sh@296 -- # e810=() 00:17:29.687 
11:03:26 -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.687 11:03:26 -- nvmf/common.sh@297 -- # x722=() 00:17:29.687 11:03:26 -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.687 11:03:26 -- nvmf/common.sh@298 -- # mlx=() 00:17:29.687 11:03:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.687 11:03:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.687 11:03:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.687 11:03:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.687 11:03:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.687 11:03:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.687 11:03:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:29.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:29.687 11:03:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.687 11:03:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:29.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:29.687 11:03:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.687 11:03:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.687 11:03:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.687 11:03:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:29.687 11:03:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.687 11:03:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:29.687 Found net devices under 
0000:4b:00.0: cvl_0_0 00:17:29.687 11:03:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.687 11:03:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.687 11:03:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.687 11:03:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:29.687 11:03:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.687 11:03:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:29.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:29.687 11:03:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.687 11:03:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:29.687 11:03:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:29.687 11:03:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:29.687 11:03:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:29.687 11:03:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.687 11:03:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.687 11:03:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.687 11:03:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.687 11:03:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.687 11:03:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.687 11:03:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.687 11:03:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.687 11:03:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.687 11:03:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.687 11:03:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.687 11:03:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.687 11:03:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.687 11:03:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.687 11:03:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.687 11:03:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.687 11:03:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.687 11:03:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.687 11:03:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.687 11:03:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:17:29.949 00:17:29.949 --- 10.0.0.2 ping statistics --- 00:17:29.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.949 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:17:29.949 11:03:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:17:29.949 00:17:29.949 --- 10.0.0.1 ping statistics --- 00:17:29.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.949 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:29.949 11:03:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.949 11:03:26 -- nvmf/common.sh@411 -- # return 0 00:17:29.949 11:03:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:29.949 11:03:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.949 11:03:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:29.949 11:03:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:29.949 11:03:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.949 11:03:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:29.949 11:03:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:29.949 11:03:26 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:29.949 11:03:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:29.949 11:03:26 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:29.949 11:03:26 -- common/autotest_common.sh@10 -- # set +x 00:17:29.949 11:03:26 -- nvmf/common.sh@470 -- # nvmfpid=347793 00:17:29.949 11:03:26 -- nvmf/common.sh@471 -- # waitforlisten 347793 00:17:29.949 11:03:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:29.949 11:03:26 -- common/autotest_common.sh@827 -- # '[' -z 347793 ']' 00:17:29.949 11:03:26 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.949 11:03:26 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:29.949 11:03:26 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.949 11:03:26 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:29.949 11:03:26 -- common/autotest_common.sh@10 -- # set +x 00:17:29.949 [2024-05-15 11:03:26.461713] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:29.949 [2024-05-15 11:03:26.461780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.949 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.949 [2024-05-15 11:03:26.524919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.949 [2024-05-15 11:03:26.589527] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.949 [2024-05-15 11:03:26.589564] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.949 [2024-05-15 11:03:26.589570] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.949 [2024-05-15 11:03:26.589575] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.949 [2024-05-15 11:03:26.589579] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
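The successful pings above close out nvmf_tcp_init for the phy (NET_TYPE=phy) topology: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and carries the target address, while its sibling (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of the steps the trace shows, using the names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic reach the initiator port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator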
00:17:29.949 [2024-05-15 11:03:26.589599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.893 11:03:27 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:30.893 11:03:27 -- common/autotest_common.sh@860 -- # return 0 00:17:30.893 11:03:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:30.893 11:03:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.893 11:03:27 -- common/autotest_common.sh@10 -- # set +x 00:17:30.893 11:03:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.893 11:03:27 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:30.893 11:03:27 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:30.893 true 00:17:30.893 11:03:27 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.893 11:03:27 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:31.154 11:03:27 -- target/tls.sh@73 -- # version=0 00:17:31.154 11:03:27 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:31.154 11:03:27 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:31.415 11:03:27 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.415 11:03:27 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:31.415 11:03:27 -- target/tls.sh@81 -- # version=13 00:17:31.415 11:03:27 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:31.415 11:03:27 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:31.676 11:03:28 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.676 11:03:28 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:31.676 11:03:28 -- target/tls.sh@89 -- # version=7 00:17:31.676 11:03:28 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:31.937 11:03:28 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.937 11:03:28 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:31.937 11:03:28 -- target/tls.sh@96 -- # ktls=false 00:17:31.937 11:03:28 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:31.937 11:03:28 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:32.200 11:03:28 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.200 11:03:28 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:32.200 11:03:28 -- target/tls.sh@104 -- # ktls=true 00:17:32.200 11:03:28 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:32.200 11:03:28 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:32.462 11:03:28 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.462 11:03:28 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:32.723 11:03:29 -- target/tls.sh@112 -- # ktls=false 00:17:32.723 11:03:29 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:32.723 11:03:29 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:17:32.723 11:03:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:32.723 11:03:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:32.723 11:03:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:32.723 11:03:29 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:32.723 11:03:29 -- nvmf/common.sh@693 -- # digest=1 00:17:32.723 11:03:29 -- nvmf/common.sh@694 -- # python - 00:17:32.723 11:03:29 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.723 11:03:29 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:32.723 11:03:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:32.723 11:03:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:32.723 11:03:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:32.723 11:03:29 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:32.723 11:03:29 -- nvmf/common.sh@693 -- # digest=1 00:17:32.723 11:03:29 -- nvmf/common.sh@694 -- # python - 00:17:32.723 11:03:29 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:32.723 11:03:29 -- target/tls.sh@121 -- # mktemp 00:17:32.723 11:03:29 -- target/tls.sh@121 -- # key_path=/tmp/tmp.WP5Ti1vEmP 00:17:32.723 11:03:29 -- target/tls.sh@122 -- # mktemp 00:17:32.723 11:03:29 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.7CHCe5c1Vn 00:17:32.723 11:03:29 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.723 11:03:29 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:32.723 11:03:29 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.WP5Ti1vEmP 00:17:32.723 11:03:29 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7CHCe5c1Vn 00:17:32.723 11:03:29 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:32.984 11:03:29 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:33.244 11:03:29 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.WP5Ti1vEmP 00:17:33.244 11:03:29 -- target/tls.sh@49 -- # local key=/tmp/tmp.WP5Ti1vEmP 00:17:33.244 11:03:29 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.244 [2024-05-15 11:03:29.817114] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.244 11:03:29 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:33.506 11:03:29 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:33.506 [2024-05-15 11:03:30.122201] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:33.506 [2024-05-15 11:03:30.122252] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.506 [2024-05-15 11:03:30.122411] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.506 11:03:30 -- target/tls.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:33.767 malloc0 00:17:33.767 11:03:30 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:34.029 11:03:30 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WP5Ti1vEmP 00:17:34.029 [2024-05-15 11:03:30.617197] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:34.029 11:03:30 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WP5Ti1vEmP 00:17:34.029 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.260 Initializing NVMe Controllers 00:17:46.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.260 Initialization complete. Launching workers. 00:17:46.260 ======================================================== 00:17:46.260 Latency(us) 00:17:46.260 Device Information : IOPS MiB/s Average min max 00:17:46.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19501.06 76.18 3281.87 1089.32 5049.88 00:17:46.260 ======================================================== 00:17:46.260 Total : 19501.06 76.18 3281.87 1089.32 5049.88 00:17:46.260 00:17:46.260 11:03:40 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WP5Ti1vEmP 00:17:46.260 11:03:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:46.260 11:03:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:46.260 11:03:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:46.260 11:03:40 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WP5Ti1vEmP' 00:17:46.260 11:03:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.260 11:03:40 -- target/tls.sh@28 -- # bdevperf_pid=350534 00:17:46.260 11:03:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.260 11:03:40 -- target/tls.sh@31 -- # waitforlisten 350534 /var/tmp/bdevperf.sock 00:17:46.260 11:03:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.260 11:03:40 -- common/autotest_common.sh@827 -- # '[' -z 350534 ']' 00:17:46.260 11:03:40 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.260 11:03:40 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:46.260 11:03:40 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
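The spdk_nvme_perf numbers above are the first TLS data-path traffic in this test: both sides read the same interchange-format PSK from a 0600 file, the listener is created with -k (secure channel required), and the host NQN is registered against that key. A condensed recap of the commands traced above; rpc.py stands in for the full scripts/rpc.py path used in the log, and the key file here is a fresh mktemp path rather than the exact /tmp/tmp.WP5Ti1vEmP name from this run:

  KEY='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  KEY_PATH=$(mktemp) && echo -n "$KEY" > "$KEY_PATH" && chmod 0600 "$KEY_PATH"
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
  spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 --psk-path "$KEY_PATH" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1'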
00:17:46.260 11:03:40 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:46.260 11:03:40 -- common/autotest_common.sh@10 -- # set +x 00:17:46.260 [2024-05-15 11:03:40.784243] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:46.260 [2024-05-15 11:03:40.784297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350534 ] 00:17:46.260 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.260 [2024-05-15 11:03:40.832894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.260 [2024-05-15 11:03:40.884683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.260 11:03:41 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:46.260 11:03:41 -- common/autotest_common.sh@860 -- # return 0 00:17:46.260 11:03:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WP5Ti1vEmP 00:17:46.260 [2024-05-15 11:03:41.685874] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:46.260 [2024-05-15 11:03:41.685928] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:46.260 TLSTESTn1 00:17:46.260 11:03:41 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.260 Running I/O for 10 seconds... 
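bdevperf runs here in its RPC-driven mode: the binary starts idle with -z, the TLS-enabled controller is attached over the bdevperf socket, and the ten-second I/O phase is kicked off separately. A condensed sketch of the three steps traced above, with the build and script paths shortened for readability:

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WP5Ti1vEmP
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests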
00:17:56.269 00:17:56.269 Latency(us) 00:17:56.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.269 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:56.269 Verification LBA range: start 0x0 length 0x2000 00:17:56.269 TLSTESTn1 : 10.01 3224.01 12.59 0.00 0.00 39663.34 4751.36 193986.56 00:17:56.269 =================================================================================================================== 00:17:56.269 Total : 3224.01 12.59 0.00 0.00 39663.34 4751.36 193986.56 00:17:56.269 0 00:17:56.269 11:03:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.269 11:03:51 -- target/tls.sh@45 -- # killprocess 350534 00:17:56.269 11:03:51 -- common/autotest_common.sh@946 -- # '[' -z 350534 ']' 00:17:56.269 11:03:51 -- common/autotest_common.sh@950 -- # kill -0 350534 00:17:56.269 11:03:51 -- common/autotest_common.sh@951 -- # uname 00:17:56.269 11:03:51 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.269 11:03:51 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 350534 00:17:56.269 11:03:51 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:56.269 11:03:51 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:56.269 11:03:51 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 350534' 00:17:56.269 killing process with pid 350534 00:17:56.269 11:03:51 -- common/autotest_common.sh@965 -- # kill 350534 00:17:56.269 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.269 00:17:56.269 Latency(us) 00:17:56.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.269 =================================================================================================================== 00:17:56.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.269 [2024-05-15 11:03:51.980970] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:56.269 11:03:51 -- common/autotest_common.sh@970 -- # wait 350534 00:17:56.269 11:03:52 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7CHCe5c1Vn 00:17:56.269 11:03:52 -- common/autotest_common.sh@648 -- # local es=0 00:17:56.269 11:03:52 -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7CHCe5c1Vn 00:17:56.269 11:03:52 -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:56.269 11:03:52 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.269 11:03:52 -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:56.269 11:03:52 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.269 11:03:52 -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7CHCe5c1Vn 00:17:56.269 11:03:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.269 11:03:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.269 11:03:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.269 11:03:52 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7CHCe5c1Vn' 00:17:56.269 11:03:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.269 11:03:52 -- target/tls.sh@28 -- # bdevperf_pid=352864 00:17:56.269 11:03:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT 
SIGTERM EXIT 00:17:56.269 11:03:52 -- target/tls.sh@31 -- # waitforlisten 352864 /var/tmp/bdevperf.sock 00:17:56.269 11:03:52 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.269 11:03:52 -- common/autotest_common.sh@827 -- # '[' -z 352864 ']' 00:17:56.269 11:03:52 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.269 11:03:52 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:56.269 11:03:52 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.269 11:03:52 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:56.269 11:03:52 -- common/autotest_common.sh@10 -- # set +x 00:17:56.269 [2024-05-15 11:03:52.141978] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:56.269 [2024-05-15 11:03:52.142030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352864 ] 00:17:56.269 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.269 [2024-05-15 11:03:52.190691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.269 [2024-05-15 11:03:52.242287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.530 11:03:52 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:56.530 11:03:52 -- common/autotest_common.sh@860 -- # return 0 00:17:56.530 11:03:52 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7CHCe5c1Vn 00:17:56.530 [2024-05-15 11:03:53.063423] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.530 [2024-05-15 11:03:53.063480] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:56.530 [2024-05-15 11:03:53.067772] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:56.530 [2024-05-15 11:03:53.068408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188b0a0 (107): Transport endpoint is not connected 00:17:56.530 [2024-05-15 11:03:53.069403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188b0a0 (9): Bad file descriptor 00:17:56.530 [2024-05-15 11:03:53.070405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:56.530 [2024-05-15 11:03:53.070413] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:56.530 [2024-05-15 11:03:53.070418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:56.530 request: 00:17:56.530 { 00:17:56.530 "name": "TLSTEST", 00:17:56.530 "trtype": "tcp", 00:17:56.530 "traddr": "10.0.0.2", 00:17:56.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.530 "adrfam": "ipv4", 00:17:56.530 "trsvcid": "4420", 00:17:56.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.530 "psk": "/tmp/tmp.7CHCe5c1Vn", 00:17:56.530 "method": "bdev_nvme_attach_controller", 00:17:56.530 "req_id": 1 00:17:56.530 } 00:17:56.530 Got JSON-RPC error response 00:17:56.530 response: 00:17:56.530 { 00:17:56.530 "code": -32602, 00:17:56.530 "message": "Invalid parameters" 00:17:56.530 } 00:17:56.530 11:03:53 -- target/tls.sh@36 -- # killprocess 352864 00:17:56.530 11:03:53 -- common/autotest_common.sh@946 -- # '[' -z 352864 ']' 00:17:56.530 11:03:53 -- common/autotest_common.sh@950 -- # kill -0 352864 00:17:56.530 11:03:53 -- common/autotest_common.sh@951 -- # uname 00:17:56.530 11:03:53 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.530 11:03:53 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 352864 00:17:56.530 11:03:53 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:56.530 11:03:53 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:56.530 11:03:53 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 352864' 00:17:56.530 killing process with pid 352864 00:17:56.530 11:03:53 -- common/autotest_common.sh@965 -- # kill 352864 00:17:56.530 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.530 00:17:56.530 Latency(us) 00:17:56.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.530 =================================================================================================================== 00:17:56.530 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.530 [2024-05-15 11:03:53.135213] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:56.530 11:03:53 -- common/autotest_common.sh@970 -- # wait 352864 00:17:56.937 11:03:53 -- target/tls.sh@37 -- # return 1 00:17:56.937 11:03:53 -- common/autotest_common.sh@651 -- # es=1 00:17:56.937 11:03:53 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.937 11:03:53 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.937 11:03:53 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.937 11:03:53 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WP5Ti1vEmP 00:17:56.937 11:03:53 -- common/autotest_common.sh@648 -- # local es=0 00:17:56.937 11:03:53 -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WP5Ti1vEmP 00:17:56.937 11:03:53 -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:56.937 11:03:53 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.937 11:03:53 -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:56.937 11:03:53 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.937 11:03:53 -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WP5Ti1vEmP 00:17:56.937 11:03:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.937 11:03:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.937 11:03:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:17:56.937 11:03:53 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WP5Ti1vEmP' 00:17:56.937 11:03:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.937 11:03:53 -- target/tls.sh@28 -- # bdevperf_pid=353038 00:17:56.937 11:03:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.937 11:03:53 -- target/tls.sh@31 -- # waitforlisten 353038 /var/tmp/bdevperf.sock 00:17:56.937 11:03:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.937 11:03:53 -- common/autotest_common.sh@827 -- # '[' -z 353038 ']' 00:17:56.937 11:03:53 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.937 11:03:53 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:56.937 11:03:53 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.937 11:03:53 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:56.937 11:03:53 -- common/autotest_common.sh@10 -- # set +x 00:17:56.937 [2024-05-15 11:03:53.288154] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:56.937 [2024-05-15 11:03:53.288207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353038 ] 00:17:56.937 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.937 [2024-05-15 11:03:53.337791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.937 [2024-05-15 11:03:53.389187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.595 11:03:54 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:57.595 11:03:54 -- common/autotest_common.sh@860 -- # return 0 00:17:57.595 11:03:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.WP5Ti1vEmP 00:17:57.595 [2024-05-15 11:03:54.178214] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.595 [2024-05-15 11:03:54.178267] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:57.595 [2024-05-15 11:03:54.188334] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.595 [2024-05-15 11:03:54.188353] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.595 [2024-05-15 11:03:54.188371] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.595 [2024-05-15 11:03:54.189197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20950a0 (107): Transport endpoint is not connected 00:17:57.595 [2024-05-15 11:03:54.190193] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20950a0 (9): Bad file descriptor 00:17:57.595 [2024-05-15 11:03:54.191194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.595 [2024-05-15 11:03:54.191202] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.595 [2024-05-15 11:03:54.191207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:57.595 request: 00:17:57.595 { 00:17:57.595 "name": "TLSTEST", 00:17:57.595 "trtype": "tcp", 00:17:57.595 "traddr": "10.0.0.2", 00:17:57.595 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:57.595 "adrfam": "ipv4", 00:17:57.595 "trsvcid": "4420", 00:17:57.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.595 "psk": "/tmp/tmp.WP5Ti1vEmP", 00:17:57.595 "method": "bdev_nvme_attach_controller", 00:17:57.595 "req_id": 1 00:17:57.595 } 00:17:57.595 Got JSON-RPC error response 00:17:57.595 response: 00:17:57.595 { 00:17:57.595 "code": -32602, 00:17:57.595 "message": "Invalid parameters" 00:17:57.595 } 00:17:57.595 11:03:54 -- target/tls.sh@36 -- # killprocess 353038 00:17:57.595 11:03:54 -- common/autotest_common.sh@946 -- # '[' -z 353038 ']' 00:17:57.595 11:03:54 -- common/autotest_common.sh@950 -- # kill -0 353038 00:17:57.595 11:03:54 -- common/autotest_common.sh@951 -- # uname 00:17:57.595 11:03:54 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:57.595 11:03:54 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 353038 00:17:57.859 11:03:54 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:57.859 11:03:54 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:57.859 11:03:54 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 353038' 00:17:57.859 killing process with pid 353038 00:17:57.859 11:03:54 -- common/autotest_common.sh@965 -- # kill 353038 00:17:57.859 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.859 00:17:57.859 Latency(us) 00:17:57.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.859 =================================================================================================================== 00:17:57.859 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.859 [2024-05-15 11:03:54.264175] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:57.859 11:03:54 -- common/autotest_common.sh@970 -- # wait 353038 00:17:57.859 11:03:54 -- target/tls.sh@37 -- # return 1 00:17:57.859 11:03:54 -- common/autotest_common.sh@651 -- # es=1 00:17:57.859 11:03:54 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:57.859 11:03:54 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:57.859 11:03:54 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:57.859 11:03:54 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WP5Ti1vEmP 00:17:57.859 11:03:54 -- common/autotest_common.sh@648 -- # local es=0 00:17:57.859 11:03:54 -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WP5Ti1vEmP 00:17:57.859 11:03:54 -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:57.859 11:03:54 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.859 11:03:54 -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:57.859 11:03:54 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.859 11:03:54 -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WP5Ti1vEmP 00:17:57.859 11:03:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.859 11:03:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:57.859 11:03:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.859 11:03:54 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WP5Ti1vEmP' 00:17:57.859 11:03:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.859 11:03:54 -- target/tls.sh@28 -- # bdevperf_pid=353235 00:17:57.859 11:03:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.859 11:03:54 -- target/tls.sh@31 -- # waitforlisten 353235 /var/tmp/bdevperf.sock 00:17:57.859 11:03:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.859 11:03:54 -- common/autotest_common.sh@827 -- # '[' -z 353235 ']' 00:17:57.859 11:03:54 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.859 11:03:54 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:57.859 11:03:54 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.859 11:03:54 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:57.859 11:03:54 -- common/autotest_common.sh@10 -- # set +x 00:17:57.860 [2024-05-15 11:03:54.417852] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:17:57.860 [2024-05-15 11:03:54.417910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353235 ] 00:17:57.860 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.860 [2024-05-15 11:03:54.466760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.127 [2024-05-15 11:03:54.518134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.728 11:03:55 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:58.728 11:03:55 -- common/autotest_common.sh@860 -- # return 0 00:17:58.728 11:03:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WP5Ti1vEmP 00:17:58.728 [2024-05-15 11:03:55.323310] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.728 [2024-05-15 11:03:55.323369] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.728 [2024-05-15 11:03:55.331113] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.728 [2024-05-15 11:03:55.331130] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.728 [2024-05-15 11:03:55.331148] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:58.728 [2024-05-15 11:03:55.331316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f670a0 (107): Transport endpoint is not connected 00:17:58.728 [2024-05-15 11:03:55.332305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f670a0 (9): Bad file descriptor 00:17:58.728 [2024-05-15 11:03:55.333307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:58.728 [2024-05-15 11:03:55.333314] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:58.728 [2024-05-15 11:03:55.333319] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:58.728 request: 00:17:58.728 { 00:17:58.728 "name": "TLSTEST", 00:17:58.728 "trtype": "tcp", 00:17:58.728 "traddr": "10.0.0.2", 00:17:58.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.728 "adrfam": "ipv4", 00:17:58.728 "trsvcid": "4420", 00:17:58.728 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:58.728 "psk": "/tmp/tmp.WP5Ti1vEmP", 00:17:58.728 "method": "bdev_nvme_attach_controller", 00:17:58.728 "req_id": 1 00:17:58.728 } 00:17:58.728 Got JSON-RPC error response 00:17:58.728 response: 00:17:58.728 { 00:17:58.728 "code": -32602, 00:17:58.728 "message": "Invalid parameters" 00:17:58.728 } 00:17:58.728 11:03:55 -- target/tls.sh@36 -- # killprocess 353235 00:17:58.728 11:03:55 -- common/autotest_common.sh@946 -- # '[' -z 353235 ']' 00:17:58.728 11:03:55 -- common/autotest_common.sh@950 -- # kill -0 353235 00:17:58.728 11:03:55 -- common/autotest_common.sh@951 -- # uname 00:17:58.728 11:03:55 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.728 11:03:55 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 353235 00:17:59.005 11:03:55 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:59.005 11:03:55 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:59.005 11:03:55 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 353235' 00:17:59.005 killing process with pid 353235 00:17:59.005 11:03:55 -- common/autotest_common.sh@965 -- # kill 353235 00:17:59.005 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.005 00:17:59.005 Latency(us) 00:17:59.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.005 =================================================================================================================== 00:17:59.005 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:59.005 [2024-05-15 11:03:55.403698] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:59.005 11:03:55 -- common/autotest_common.sh@970 -- # wait 353235 00:17:59.005 11:03:55 -- target/tls.sh@37 -- # return 1 00:17:59.005 11:03:55 -- common/autotest_common.sh@651 -- # es=1 00:17:59.005 11:03:55 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:59.005 11:03:55 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:59.005 11:03:55 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:59.005 11:03:55 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.005 11:03:55 -- common/autotest_common.sh@648 -- # local es=0 00:17:59.005 11:03:55 -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.005 11:03:55 -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:59.005 11:03:55 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.005 11:03:55 -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:59.005 11:03:55 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.005 11:03:55 -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.005 11:03:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.005 11:03:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.005 11:03:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.005 11:03:55 -- target/tls.sh@23 -- # psk= 
00:17:59.005 11:03:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.005 11:03:55 -- target/tls.sh@28 -- # bdevperf_pid=353577 00:17:59.005 11:03:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.005 11:03:55 -- target/tls.sh@31 -- # waitforlisten 353577 /var/tmp/bdevperf.sock 00:17:59.005 11:03:55 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.005 11:03:55 -- common/autotest_common.sh@827 -- # '[' -z 353577 ']' 00:17:59.005 11:03:55 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.005 11:03:55 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.005 11:03:55 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.005 11:03:55 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.005 11:03:55 -- common/autotest_common.sh@10 -- # set +x 00:17:59.005 [2024-05-15 11:03:55.558534] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:17:59.005 [2024-05-15 11:03:55.558592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353577 ] 00:17:59.005 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.005 [2024-05-15 11:03:55.607550] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.271 [2024-05-15 11:03:55.658922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.862 11:03:56 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:59.862 11:03:56 -- common/autotest_common.sh@860 -- # return 0 00:17:59.862 11:03:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:59.862 [2024-05-15 11:03:56.453859] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:59.862 [2024-05-15 11:03:56.455399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238bb00 (9): Bad file descriptor 00:17:59.862 [2024-05-15 11:03:56.456399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:59.862 [2024-05-15 11:03:56.456407] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:59.862 [2024-05-15 11:03:56.456412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:59.862 request: 00:17:59.862 { 00:17:59.862 "name": "TLSTEST", 00:17:59.862 "trtype": "tcp", 00:17:59.862 "traddr": "10.0.0.2", 00:17:59.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.862 "adrfam": "ipv4", 00:17:59.862 "trsvcid": "4420", 00:17:59.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.862 "method": "bdev_nvme_attach_controller", 00:17:59.862 "req_id": 1 00:17:59.862 } 00:17:59.862 Got JSON-RPC error response 00:17:59.862 response: 00:17:59.862 { 00:17:59.862 "code": -32602, 00:17:59.862 "message": "Invalid parameters" 00:17:59.862 } 00:17:59.862 11:03:56 -- target/tls.sh@36 -- # killprocess 353577 00:17:59.862 11:03:56 -- common/autotest_common.sh@946 -- # '[' -z 353577 ']' 00:17:59.862 11:03:56 -- common/autotest_common.sh@950 -- # kill -0 353577 00:17:59.862 11:03:56 -- common/autotest_common.sh@951 -- # uname 00:17:59.862 11:03:56 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:59.862 11:03:56 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 353577 00:18:00.138 11:03:56 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:00.138 11:03:56 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:00.138 11:03:56 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 353577' 00:18:00.138 killing process with pid 353577 00:18:00.138 11:03:56 -- common/autotest_common.sh@965 -- # kill 353577 00:18:00.138 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.138 00:18:00.138 Latency(us) 00:18:00.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.138 =================================================================================================================== 00:18:00.138 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.138 11:03:56 -- common/autotest_common.sh@970 -- # wait 353577 00:18:00.138 11:03:56 -- target/tls.sh@37 -- # return 1 00:18:00.138 11:03:56 -- common/autotest_common.sh@651 -- # es=1 00:18:00.138 11:03:56 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:00.138 11:03:56 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:00.138 11:03:56 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:00.138 11:03:56 -- target/tls.sh@158 -- # killprocess 347793 00:18:00.138 11:03:56 -- common/autotest_common.sh@946 -- # '[' -z 347793 ']' 00:18:00.138 11:03:56 -- common/autotest_common.sh@950 -- # kill -0 347793 00:18:00.138 11:03:56 -- common/autotest_common.sh@951 -- # uname 00:18:00.138 11:03:56 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:00.139 11:03:56 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 347793 00:18:00.139 11:03:56 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:00.139 11:03:56 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:00.139 11:03:56 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 347793' 00:18:00.139 killing process with pid 347793 00:18:00.139 11:03:56 -- common/autotest_common.sh@965 -- # kill 347793 00:18:00.139 [2024-05-15 11:03:56.690051] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:00.139 [2024-05-15 11:03:56.690073] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:00.139 11:03:56 -- common/autotest_common.sh@970 -- # wait 347793 
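Note: the failing attach attempts above all exercise the same host-side pattern: start bdevperf in wait mode on its own RPC socket, then try to attach the TLS listener via bdev_nvme_attach_controller. A minimal sketch of that pattern, using the paths, NQNs and key file from the trace (backgrounding bdevperf with & is an assumption; the test script waits for the socket via a helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# bdevperf in wait mode (-z) exposes its own JSON-RPC socket for the attach call.
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Attach the remote TLS-enabled subsystem; --psk points at the PSK file on disk.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.WP5Ti1vEmP
# With no matching PSK registered on the target for this host/subsystem identity, the RPC
# returns -32602 "Invalid parameters", the failure the NOT-wrapped tests above expect.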
00:18:00.417 11:03:56 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.417 11:03:56 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.417 11:03:56 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:00.417 11:03:56 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:00.417 11:03:56 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:00.417 11:03:56 -- nvmf/common.sh@693 -- # digest=2 00:18:00.417 11:03:56 -- nvmf/common.sh@694 -- # python - 00:18:00.417 11:03:56 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.417 11:03:56 -- target/tls.sh@160 -- # mktemp 00:18:00.417 11:03:56 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.1w7s70vSTv 00:18:00.417 11:03:56 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.417 11:03:56 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.1w7s70vSTv 00:18:00.417 11:03:56 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:00.417 11:03:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:00.417 11:03:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:00.417 11:03:56 -- common/autotest_common.sh@10 -- # set +x 00:18:00.417 11:03:56 -- nvmf/common.sh@470 -- # nvmfpid=353858 00:18:00.417 11:03:56 -- nvmf/common.sh@471 -- # waitforlisten 353858 00:18:00.417 11:03:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.417 11:03:56 -- common/autotest_common.sh@827 -- # '[' -z 353858 ']' 00:18:00.417 11:03:56 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.417 11:03:56 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.417 11:03:56 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.417 11:03:56 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.417 11:03:56 -- common/autotest_common.sh@10 -- # set +x 00:18:00.417 [2024-05-15 11:03:56.922606] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:00.417 [2024-05-15 11:03:56.922657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.417 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.417 [2024-05-15 11:03:57.001331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.417 [2024-05-15 11:03:57.055714] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.417 [2024-05-15 11:03:57.055747] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.417 [2024-05-15 11:03:57.055756] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.417 [2024-05-15 11:03:57.055760] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:00.417 [2024-05-15 11:03:57.055764] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.417 [2024-05-15 11:03:57.055780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.707 11:03:57 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.707 11:03:57 -- common/autotest_common.sh@860 -- # return 0 00:18:00.707 11:03:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:00.707 11:03:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.707 11:03:57 -- common/autotest_common.sh@10 -- # set +x 00:18:00.707 11:03:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.707 11:03:57 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.1w7s70vSTv 00:18:00.707 11:03:57 -- target/tls.sh@49 -- # local key=/tmp/tmp.1w7s70vSTv 00:18:00.707 11:03:57 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.707 [2024-05-15 11:03:57.304341] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.707 11:03:57 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.009 11:03:57 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.009 [2024-05-15 11:03:57.597071] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:01.009 [2024-05-15 11:03:57.597129] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.009 [2024-05-15 11:03:57.597330] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.009 11:03:57 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:01.282 malloc0 00:18:01.282 11:03:57 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:01.282 11:03:57 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1w7s70vSTv 00:18:01.544 [2024-05-15 11:03:58.024559] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:01.544 11:03:58 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1w7s70vSTv 00:18:01.544 11:03:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.544 11:03:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.544 11:03:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:01.545 11:03:58 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1w7s70vSTv' 00:18:01.545 11:03:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.545 11:03:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.545 11:03:58 -- target/tls.sh@28 -- # bdevperf_pid=353982 00:18:01.545 11:03:58 -- 
target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.545 11:03:58 -- target/tls.sh@31 -- # waitforlisten 353982 /var/tmp/bdevperf.sock 00:18:01.545 11:03:58 -- common/autotest_common.sh@827 -- # '[' -z 353982 ']' 00:18:01.545 11:03:58 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.545 11:03:58 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:01.545 11:03:58 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.545 11:03:58 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:01.545 11:03:58 -- common/autotest_common.sh@10 -- # set +x 00:18:01.545 [2024-05-15 11:03:58.072443] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:01.545 [2024-05-15 11:03:58.072490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353982 ] 00:18:01.545 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.545 [2024-05-15 11:03:58.122434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.545 [2024-05-15 11:03:58.174795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.502 11:03:58 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:02.502 11:03:58 -- common/autotest_common.sh@860 -- # return 0 00:18:02.502 11:03:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1w7s70vSTv 00:18:02.502 [2024-05-15 11:03:58.987894] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.502 [2024-05-15 11:03:58.987954] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.502 TLSTESTn1 00:18:02.502 11:03:59 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:02.771 Running I/O for 10 seconds... 
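Note: the successful TLSTESTn1 attach above relies on the target-side setup_nvmf_tgt sequence traced just before it: a TCP transport, a subsystem, a TLS-enabled listener, a malloc namespace, and a host entry bound to the 0600-permission key. A condensed sketch of those RPCs as they appear in the trace (the RPC and KEY variables are shorthand introduced here; rpc.py talks to the nvmf_tgt's default /var/tmp/spdk.sock):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.1w7s70vSTv    # interchange-format PSK created and chmod 0600'd above

$RPC nvmf_create_transport -t tcp -o                                                        # TCP transport, default options
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
$RPC bdev_malloc_create 32 4096 -b malloc0                                                  # 32 MiB bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"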
00:18:13.039 00:18:13.039 Latency(us) 00:18:13.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.039 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.039 Verification LBA range: start 0x0 length 0x2000 00:18:13.039 TLSTESTn1 : 10.10 3336.34 13.03 0.00 0.00 38199.96 5024.43 94808.75 00:18:13.039 =================================================================================================================== 00:18:13.039 Total : 3336.34 13.03 0.00 0.00 38199.96 5024.43 94808.75 00:18:13.039 0 00:18:13.039 11:04:09 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.039 11:04:09 -- target/tls.sh@45 -- # killprocess 353982 00:18:13.039 11:04:09 -- common/autotest_common.sh@946 -- # '[' -z 353982 ']' 00:18:13.039 11:04:09 -- common/autotest_common.sh@950 -- # kill -0 353982 00:18:13.039 11:04:09 -- common/autotest_common.sh@951 -- # uname 00:18:13.039 11:04:09 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:13.039 11:04:09 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 353982 00:18:13.039 11:04:09 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:13.039 11:04:09 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:13.039 11:04:09 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 353982' 00:18:13.039 killing process with pid 353982 00:18:13.039 11:04:09 -- common/autotest_common.sh@965 -- # kill 353982 00:18:13.039 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.039 00:18:13.039 Latency(us) 00:18:13.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.039 =================================================================================================================== 00:18:13.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.040 [2024-05-15 11:04:09.358385] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:13.040 11:04:09 -- common/autotest_common.sh@970 -- # wait 353982 00:18:13.040 11:04:09 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.1w7s70vSTv 00:18:13.040 11:04:09 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1w7s70vSTv 00:18:13.040 11:04:09 -- common/autotest_common.sh@648 -- # local es=0 00:18:13.040 11:04:09 -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1w7s70vSTv 00:18:13.040 11:04:09 -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:13.040 11:04:09 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:13.040 11:04:09 -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:13.040 11:04:09 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:13.040 11:04:09 -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1w7s70vSTv 00:18:13.040 11:04:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.040 11:04:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.040 11:04:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.040 11:04:09 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1w7s70vSTv' 00:18:13.040 11:04:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.040 11:04:09 -- target/tls.sh@28 -- # bdevperf_pid=356330 
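Note: for the run whose latency table appears above, the I/O is driven over the same RPC socket rather than by bdevperf's command line: bdevperf was started with -z, so the -q 128 / -o 4096 / -w verify workload only begins once perform_tests is issued, as in this sketch taken from the trace (the -t 20 value appears to be an RPC timeout allowance; the 10-second run length comes from bdevperf's own -t 10):

# Start the verify workload on the already-attached TLSTESTn1 controller.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests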
00:18:13.040 11:04:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.040 11:04:09 -- target/tls.sh@31 -- # waitforlisten 356330 /var/tmp/bdevperf.sock 00:18:13.040 11:04:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.040 11:04:09 -- common/autotest_common.sh@827 -- # '[' -z 356330 ']' 00:18:13.040 11:04:09 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.040 11:04:09 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:13.040 11:04:09 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.040 11:04:09 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:13.040 11:04:09 -- common/autotest_common.sh@10 -- # set +x 00:18:13.040 [2024-05-15 11:04:09.524860] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:13.040 [2024-05-15 11:04:09.524914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356330 ] 00:18:13.040 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.040 [2024-05-15 11:04:09.574382] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.040 [2024-05-15 11:04:09.624669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.012 11:04:10 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:14.012 11:04:10 -- common/autotest_common.sh@860 -- # return 0 00:18:14.012 11:04:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1w7s70vSTv 00:18:14.012 [2024-05-15 11:04:10.441870] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.012 [2024-05-15 11:04:10.441914] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:14.012 [2024-05-15 11:04:10.441920] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.1w7s70vSTv 00:18:14.012 request: 00:18:14.012 { 00:18:14.012 "name": "TLSTEST", 00:18:14.012 "trtype": "tcp", 00:18:14.012 "traddr": "10.0.0.2", 00:18:14.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.012 "adrfam": "ipv4", 00:18:14.012 "trsvcid": "4420", 00:18:14.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.012 "psk": "/tmp/tmp.1w7s70vSTv", 00:18:14.012 "method": "bdev_nvme_attach_controller", 00:18:14.012 "req_id": 1 00:18:14.012 } 00:18:14.012 Got JSON-RPC error response 00:18:14.012 response: 00:18:14.012 { 00:18:14.012 "code": -1, 00:18:14.012 "message": "Operation not permitted" 00:18:14.012 } 00:18:14.012 11:04:10 -- target/tls.sh@36 -- # killprocess 356330 00:18:14.012 11:04:10 -- common/autotest_common.sh@946 -- # '[' -z 356330 ']' 00:18:14.012 11:04:10 -- common/autotest_common.sh@950 -- # kill -0 356330 00:18:14.012 11:04:10 -- common/autotest_common.sh@951 -- # uname 00:18:14.012 11:04:10 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.012 11:04:10 -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 356330 00:18:14.012 11:04:10 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:14.012 11:04:10 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:14.012 11:04:10 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 356330' 00:18:14.012 killing process with pid 356330 00:18:14.012 11:04:10 -- common/autotest_common.sh@965 -- # kill 356330 00:18:14.012 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.012 00:18:14.012 Latency(us) 00:18:14.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.012 =================================================================================================================== 00:18:14.012 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.012 11:04:10 -- common/autotest_common.sh@970 -- # wait 356330 00:18:14.012 11:04:10 -- target/tls.sh@37 -- # return 1 00:18:14.012 11:04:10 -- common/autotest_common.sh@651 -- # es=1 00:18:14.012 11:04:10 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:14.012 11:04:10 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:14.012 11:04:10 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:14.012 11:04:10 -- target/tls.sh@174 -- # killprocess 353858 00:18:14.012 11:04:10 -- common/autotest_common.sh@946 -- # '[' -z 353858 ']' 00:18:14.012 11:04:10 -- common/autotest_common.sh@950 -- # kill -0 353858 00:18:14.012 11:04:10 -- common/autotest_common.sh@951 -- # uname 00:18:14.012 11:04:10 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.012 11:04:10 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 353858 00:18:14.294 11:04:10 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:14.294 11:04:10 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:14.294 11:04:10 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 353858' 00:18:14.294 killing process with pid 353858 00:18:14.294 11:04:10 -- common/autotest_common.sh@965 -- # kill 353858 00:18:14.294 [2024-05-15 11:04:10.691269] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:14.294 [2024-05-15 11:04:10.691302] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:14.294 11:04:10 -- common/autotest_common.sh@970 -- # wait 353858 00:18:14.294 11:04:10 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:14.294 11:04:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:14.294 11:04:10 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:14.294 11:04:10 -- common/autotest_common.sh@10 -- # set +x 00:18:14.294 11:04:10 -- nvmf/common.sh@470 -- # nvmfpid=356685 00:18:14.294 11:04:10 -- nvmf/common.sh@471 -- # waitforlisten 356685 00:18:14.294 11:04:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.294 11:04:10 -- common/autotest_common.sh@827 -- # '[' -z 356685 ']' 00:18:14.294 11:04:10 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.294 11:04:10 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:14.294 11:04:10 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:18:14.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.294 11:04:10 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:14.294 11:04:10 -- common/autotest_common.sh@10 -- # set +x 00:18:14.294 [2024-05-15 11:04:10.864005] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:14.294 [2024-05-15 11:04:10.864056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.294 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.584 [2024-05-15 11:04:10.942968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.584 [2024-05-15 11:04:10.996252] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.584 [2024-05-15 11:04:10.996285] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.584 [2024-05-15 11:04:10.996294] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.584 [2024-05-15 11:04:10.996298] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.584 [2024-05-15 11:04:10.996302] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.584 [2024-05-15 11:04:10.996316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.211 11:04:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:15.211 11:04:11 -- common/autotest_common.sh@860 -- # return 0 00:18:15.211 11:04:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:15.211 11:04:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.211 11:04:11 -- common/autotest_common.sh@10 -- # set +x 00:18:15.211 11:04:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.211 11:04:11 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.1w7s70vSTv 00:18:15.211 11:04:11 -- common/autotest_common.sh@648 -- # local es=0 00:18:15.211 11:04:11 -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.1w7s70vSTv 00:18:15.211 11:04:11 -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:15.211 11:04:11 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.211 11:04:11 -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:15.211 11:04:11 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.211 11:04:11 -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.1w7s70vSTv 00:18:15.211 11:04:11 -- target/tls.sh@49 -- # local key=/tmp/tmp.1w7s70vSTv 00:18:15.211 11:04:11 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.211 [2024-05-15 11:04:11.810410] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.211 11:04:11 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.477 11:04:11 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:15.745 [2024-05-15 11:04:12.151231] 
nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:15.745 [2024-05-15 11:04:12.151270] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.745 [2024-05-15 11:04:12.151422] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.745 11:04:12 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.745 malloc0 00:18:15.745 11:04:12 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:16.014 11:04:12 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1w7s70vSTv 00:18:16.014 [2024-05-15 11:04:12.598271] tcp.c:3567:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:16.014 [2024-05-15 11:04:12.598289] tcp.c:3653:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:16.014 [2024-05-15 11:04:12.598310] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:16.014 request: 00:18:16.014 { 00:18:16.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.014 "host": "nqn.2016-06.io.spdk:host1", 00:18:16.014 "psk": "/tmp/tmp.1w7s70vSTv", 00:18:16.014 "method": "nvmf_subsystem_add_host", 00:18:16.014 "req_id": 1 00:18:16.014 } 00:18:16.014 Got JSON-RPC error response 00:18:16.014 response: 00:18:16.014 { 00:18:16.014 "code": -32603, 00:18:16.014 "message": "Internal error" 00:18:16.014 } 00:18:16.014 11:04:12 -- common/autotest_common.sh@651 -- # es=1 00:18:16.014 11:04:12 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:16.014 11:04:12 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:16.014 11:04:12 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:16.014 11:04:12 -- target/tls.sh@180 -- # killprocess 356685 00:18:16.014 11:04:12 -- common/autotest_common.sh@946 -- # '[' -z 356685 ']' 00:18:16.014 11:04:12 -- common/autotest_common.sh@950 -- # kill -0 356685 00:18:16.014 11:04:12 -- common/autotest_common.sh@951 -- # uname 00:18:16.014 11:04:12 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:16.014 11:04:12 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 356685 00:18:16.287 11:04:12 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:16.287 11:04:12 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:16.287 11:04:12 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 356685' 00:18:16.287 killing process with pid 356685 00:18:16.287 11:04:12 -- common/autotest_common.sh@965 -- # kill 356685 00:18:16.287 [2024-05-15 11:04:12.669476] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:16.287 11:04:12 -- common/autotest_common.sh@970 -- # wait 356685 00:18:16.287 11:04:12 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.1w7s70vSTv 00:18:16.287 11:04:12 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:16.287 11:04:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:16.287 11:04:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:16.287 
11:04:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.287 11:04:12 -- nvmf/common.sh@470 -- # nvmfpid=357064 00:18:16.287 11:04:12 -- nvmf/common.sh@471 -- # waitforlisten 357064 00:18:16.287 11:04:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:16.287 11:04:12 -- common/autotest_common.sh@827 -- # '[' -z 357064 ']' 00:18:16.287 11:04:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.287 11:04:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:16.287 11:04:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.287 11:04:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:16.287 11:04:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.287 [2024-05-15 11:04:12.849715] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:16.287 [2024-05-15 11:04:12.849767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.287 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.287 [2024-05-15 11:04:12.930624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.556 [2024-05-15 11:04:12.981886] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.556 [2024-05-15 11:04:12.981922] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.556 [2024-05-15 11:04:12.981927] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.556 [2024-05-15 11:04:12.981932] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.556 [2024-05-15 11:04:12.981936] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
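Note: the "Operation not permitted" and "Internal error" responses above both stem from the key file's 0666 mode: bdev_nvme refuses to load the PSK on the initiator side and nvmf_subsystem_add_host refuses to read it on the target side, until the file is chmod'd back to 0600. The key itself was built earlier by format_interchange_psk with digest id 2. A hedged re-creation of that helper, assuming (from the nvmf/common.sh steps traced at @691-@694) that the interchange format is the NVMeTLSkey-1 prefix, the digest id, and base64 of the configured key bytes plus their CRC32:

format_interchange_psk() {
    # $1 = configured PSK (ASCII hex string), $2 = digest id (2 in the trace above)
    python3 - "$1" "$2" <<'PY'
import base64, sys, zlib

key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC32 of the key bytes, little-endian
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:", end="")
PY
}

key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
key_path=$(mktemp)
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"   # anything looser (the 0666 used above) is rejected by both sides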
00:18:16.556 [2024-05-15 11:04:12.981952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.150 11:04:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:17.150 11:04:13 -- common/autotest_common.sh@860 -- # return 0 00:18:17.150 11:04:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:17.150 11:04:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.150 11:04:13 -- common/autotest_common.sh@10 -- # set +x 00:18:17.150 11:04:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.150 11:04:13 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.1w7s70vSTv 00:18:17.150 11:04:13 -- target/tls.sh@49 -- # local key=/tmp/tmp.1w7s70vSTv 00:18:17.150 11:04:13 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:17.420 [2024-05-15 11:04:13.823977] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.420 11:04:13 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:17.420 11:04:13 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:17.689 [2024-05-15 11:04:14.124692] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:17.689 [2024-05-15 11:04:14.124732] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.689 [2024-05-15 11:04:14.124885] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.689 11:04:14 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.689 malloc0 00:18:17.689 11:04:14 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.959 11:04:14 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1w7s70vSTv 00:18:17.959 [2024-05-15 11:04:14.555554] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:17.959 11:04:14 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.959 11:04:14 -- target/tls.sh@188 -- # bdevperf_pid=357429 00:18:17.959 11:04:14 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.959 11:04:14 -- target/tls.sh@191 -- # waitforlisten 357429 /var/tmp/bdevperf.sock 00:18:17.959 11:04:14 -- common/autotest_common.sh@827 -- # '[' -z 357429 ']' 00:18:17.959 11:04:14 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.959 11:04:14 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:17.959 11:04:14 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:17.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.959 11:04:14 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:17.959 11:04:14 -- common/autotest_common.sh@10 -- # set +x 00:18:17.959 [2024-05-15 11:04:14.600278] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:17.959 [2024-05-15 11:04:14.600328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357429 ] 00:18:18.229 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.229 [2024-05-15 11:04:14.649501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.229 [2024-05-15 11:04:14.701143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.229 11:04:14 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:18.229 11:04:14 -- common/autotest_common.sh@860 -- # return 0 00:18:18.229 11:04:14 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1w7s70vSTv 00:18:18.499 [2024-05-15 11:04:14.920789] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.499 [2024-05-15 11:04:14.920852] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.499 TLSTESTn1 00:18:18.499 11:04:15 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:18.769 11:04:15 -- target/tls.sh@196 -- # tgtconf='{ 00:18:18.769 "subsystems": [ 00:18:18.769 { 00:18:18.769 "subsystem": "keyring", 00:18:18.769 "config": [] 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "subsystem": "iobuf", 00:18:18.769 "config": [ 00:18:18.769 { 00:18:18.769 "method": "iobuf_set_options", 00:18:18.769 "params": { 00:18:18.769 "small_pool_count": 8192, 00:18:18.769 "large_pool_count": 1024, 00:18:18.769 "small_bufsize": 8192, 00:18:18.769 "large_bufsize": 135168 00:18:18.769 } 00:18:18.769 } 00:18:18.769 ] 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "subsystem": "sock", 00:18:18.769 "config": [ 00:18:18.769 { 00:18:18.769 "method": "sock_impl_set_options", 00:18:18.769 "params": { 00:18:18.769 "impl_name": "posix", 00:18:18.769 "recv_buf_size": 2097152, 00:18:18.769 "send_buf_size": 2097152, 00:18:18.769 "enable_recv_pipe": true, 00:18:18.769 "enable_quickack": false, 00:18:18.769 "enable_placement_id": 0, 00:18:18.769 "enable_zerocopy_send_server": true, 00:18:18.769 "enable_zerocopy_send_client": false, 00:18:18.769 "zerocopy_threshold": 0, 00:18:18.769 "tls_version": 0, 00:18:18.769 "enable_ktls": false 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "sock_impl_set_options", 00:18:18.769 "params": { 00:18:18.769 "impl_name": "ssl", 00:18:18.769 "recv_buf_size": 4096, 00:18:18.769 "send_buf_size": 4096, 00:18:18.769 "enable_recv_pipe": true, 00:18:18.769 "enable_quickack": false, 00:18:18.769 "enable_placement_id": 0, 00:18:18.769 "enable_zerocopy_send_server": true, 00:18:18.769 "enable_zerocopy_send_client": false, 00:18:18.769 "zerocopy_threshold": 0, 00:18:18.769 "tls_version": 0, 00:18:18.769 "enable_ktls": false 00:18:18.769 } 00:18:18.769 
} 00:18:18.769 ] 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "subsystem": "vmd", 00:18:18.769 "config": [] 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "subsystem": "accel", 00:18:18.769 "config": [ 00:18:18.769 { 00:18:18.769 "method": "accel_set_options", 00:18:18.769 "params": { 00:18:18.769 "small_cache_size": 128, 00:18:18.769 "large_cache_size": 16, 00:18:18.769 "task_count": 2048, 00:18:18.769 "sequence_count": 2048, 00:18:18.769 "buf_count": 2048 00:18:18.769 } 00:18:18.769 } 00:18:18.769 ] 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "subsystem": "bdev", 00:18:18.769 "config": [ 00:18:18.769 { 00:18:18.769 "method": "bdev_set_options", 00:18:18.769 "params": { 00:18:18.769 "bdev_io_pool_size": 65535, 00:18:18.769 "bdev_io_cache_size": 256, 00:18:18.769 "bdev_auto_examine": true, 00:18:18.769 "iobuf_small_cache_size": 128, 00:18:18.769 "iobuf_large_cache_size": 16 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "bdev_raid_set_options", 00:18:18.769 "params": { 00:18:18.769 "process_window_size_kb": 1024 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "bdev_iscsi_set_options", 00:18:18.769 "params": { 00:18:18.769 "timeout_sec": 30 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "bdev_nvme_set_options", 00:18:18.769 "params": { 00:18:18.769 "action_on_timeout": "none", 00:18:18.769 "timeout_us": 0, 00:18:18.769 "timeout_admin_us": 0, 00:18:18.769 "keep_alive_timeout_ms": 10000, 00:18:18.769 "arbitration_burst": 0, 00:18:18.769 "low_priority_weight": 0, 00:18:18.769 "medium_priority_weight": 0, 00:18:18.769 "high_priority_weight": 0, 00:18:18.769 "nvme_adminq_poll_period_us": 10000, 00:18:18.769 "nvme_ioq_poll_period_us": 0, 00:18:18.769 "io_queue_requests": 0, 00:18:18.769 "delay_cmd_submit": true, 00:18:18.769 "transport_retry_count": 4, 00:18:18.769 "bdev_retry_count": 3, 00:18:18.769 "transport_ack_timeout": 0, 00:18:18.769 "ctrlr_loss_timeout_sec": 0, 00:18:18.769 "reconnect_delay_sec": 0, 00:18:18.769 "fast_io_fail_timeout_sec": 0, 00:18:18.769 "disable_auto_failback": false, 00:18:18.769 "generate_uuids": false, 00:18:18.769 "transport_tos": 0, 00:18:18.769 "nvme_error_stat": false, 00:18:18.769 "rdma_srq_size": 0, 00:18:18.769 "io_path_stat": false, 00:18:18.769 "allow_accel_sequence": false, 00:18:18.769 "rdma_max_cq_size": 0, 00:18:18.769 "rdma_cm_event_timeout_ms": 0, 00:18:18.769 "dhchap_digests": [ 00:18:18.769 "sha256", 00:18:18.769 "sha384", 00:18:18.769 "sha512" 00:18:18.769 ], 00:18:18.769 "dhchap_dhgroups": [ 00:18:18.769 "null", 00:18:18.769 "ffdhe2048", 00:18:18.769 "ffdhe3072", 00:18:18.769 "ffdhe4096", 00:18:18.769 "ffdhe6144", 00:18:18.769 "ffdhe8192" 00:18:18.769 ] 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "bdev_nvme_set_hotplug", 00:18:18.769 "params": { 00:18:18.769 "period_us": 100000, 00:18:18.769 "enable": false 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "bdev_malloc_create", 00:18:18.769 "params": { 00:18:18.769 "name": "malloc0", 00:18:18.769 "num_blocks": 8192, 00:18:18.769 "block_size": 4096, 00:18:18.769 "physical_block_size": 4096, 00:18:18.769 "uuid": "11cc279e-b6b7-468a-91d4-9f9f34b3f8cd", 00:18:18.769 "optimal_io_boundary": 0 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "bdev_wait_for_examine" 00:18:18.769 } 00:18:18.769 ] 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "subsystem": "nbd", 00:18:18.769 "config": [] 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "subsystem": "scheduler", 00:18:18.769 "config": [ 
00:18:18.769 { 00:18:18.769 "method": "framework_set_scheduler", 00:18:18.769 "params": { 00:18:18.769 "name": "static" 00:18:18.769 } 00:18:18.769 } 00:18:18.769 ] 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "subsystem": "nvmf", 00:18:18.769 "config": [ 00:18:18.769 { 00:18:18.769 "method": "nvmf_set_config", 00:18:18.769 "params": { 00:18:18.769 "discovery_filter": "match_any", 00:18:18.769 "admin_cmd_passthru": { 00:18:18.769 "identify_ctrlr": false 00:18:18.769 } 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "nvmf_set_max_subsystems", 00:18:18.769 "params": { 00:18:18.769 "max_subsystems": 1024 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "nvmf_set_crdt", 00:18:18.769 "params": { 00:18:18.769 "crdt1": 0, 00:18:18.769 "crdt2": 0, 00:18:18.769 "crdt3": 0 00:18:18.769 } 00:18:18.769 }, 00:18:18.769 { 00:18:18.769 "method": "nvmf_create_transport", 00:18:18.769 "params": { 00:18:18.769 "trtype": "TCP", 00:18:18.769 "max_queue_depth": 128, 00:18:18.769 "max_io_qpairs_per_ctrlr": 127, 00:18:18.769 "in_capsule_data_size": 4096, 00:18:18.769 "max_io_size": 131072, 00:18:18.769 "io_unit_size": 131072, 00:18:18.769 "max_aq_depth": 128, 00:18:18.769 "num_shared_buffers": 511, 00:18:18.769 "buf_cache_size": 4294967295, 00:18:18.769 "dif_insert_or_strip": false, 00:18:18.769 "zcopy": false, 00:18:18.769 "c2h_success": false, 00:18:18.769 "sock_priority": 0, 00:18:18.770 "abort_timeout_sec": 1, 00:18:18.770 "ack_timeout": 0, 00:18:18.770 "data_wr_pool_size": 0 00:18:18.770 } 00:18:18.770 }, 00:18:18.770 { 00:18:18.770 "method": "nvmf_create_subsystem", 00:18:18.770 "params": { 00:18:18.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.770 "allow_any_host": false, 00:18:18.770 "serial_number": "SPDK00000000000001", 00:18:18.770 "model_number": "SPDK bdev Controller", 00:18:18.770 "max_namespaces": 10, 00:18:18.770 "min_cntlid": 1, 00:18:18.770 "max_cntlid": 65519, 00:18:18.770 "ana_reporting": false 00:18:18.770 } 00:18:18.770 }, 00:18:18.770 { 00:18:18.770 "method": "nvmf_subsystem_add_host", 00:18:18.770 "params": { 00:18:18.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.770 "host": "nqn.2016-06.io.spdk:host1", 00:18:18.770 "psk": "/tmp/tmp.1w7s70vSTv" 00:18:18.770 } 00:18:18.770 }, 00:18:18.770 { 00:18:18.770 "method": "nvmf_subsystem_add_ns", 00:18:18.770 "params": { 00:18:18.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.770 "namespace": { 00:18:18.770 "nsid": 1, 00:18:18.770 "bdev_name": "malloc0", 00:18:18.770 "nguid": "11CC279EB6B7468A91D49F9F34B3F8CD", 00:18:18.770 "uuid": "11cc279e-b6b7-468a-91d4-9f9f34b3f8cd", 00:18:18.770 "no_auto_visible": false 00:18:18.770 } 00:18:18.770 } 00:18:18.770 }, 00:18:18.770 { 00:18:18.770 "method": "nvmf_subsystem_add_listener", 00:18:18.770 "params": { 00:18:18.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.770 "listen_address": { 00:18:18.770 "trtype": "TCP", 00:18:18.770 "adrfam": "IPv4", 00:18:18.770 "traddr": "10.0.0.2", 00:18:18.770 "trsvcid": "4420" 00:18:18.770 }, 00:18:18.770 "secure_channel": true 00:18:18.770 } 00:18:18.770 } 00:18:18.770 ] 00:18:18.770 } 00:18:18.770 ] 00:18:18.770 }' 00:18:18.770 11:04:15 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:19.038 11:04:15 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:19.038 "subsystems": [ 00:18:19.038 { 00:18:19.038 "subsystem": "keyring", 00:18:19.038 "config": [] 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "subsystem": "iobuf", 00:18:19.038 "config": [ 
00:18:19.038 { 00:18:19.038 "method": "iobuf_set_options", 00:18:19.038 "params": { 00:18:19.038 "small_pool_count": 8192, 00:18:19.038 "large_pool_count": 1024, 00:18:19.038 "small_bufsize": 8192, 00:18:19.038 "large_bufsize": 135168 00:18:19.038 } 00:18:19.038 } 00:18:19.038 ] 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "subsystem": "sock", 00:18:19.038 "config": [ 00:18:19.038 { 00:18:19.038 "method": "sock_impl_set_options", 00:18:19.038 "params": { 00:18:19.038 "impl_name": "posix", 00:18:19.038 "recv_buf_size": 2097152, 00:18:19.038 "send_buf_size": 2097152, 00:18:19.038 "enable_recv_pipe": true, 00:18:19.038 "enable_quickack": false, 00:18:19.038 "enable_placement_id": 0, 00:18:19.038 "enable_zerocopy_send_server": true, 00:18:19.038 "enable_zerocopy_send_client": false, 00:18:19.038 "zerocopy_threshold": 0, 00:18:19.038 "tls_version": 0, 00:18:19.038 "enable_ktls": false 00:18:19.038 } 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "method": "sock_impl_set_options", 00:18:19.038 "params": { 00:18:19.038 "impl_name": "ssl", 00:18:19.038 "recv_buf_size": 4096, 00:18:19.038 "send_buf_size": 4096, 00:18:19.038 "enable_recv_pipe": true, 00:18:19.038 "enable_quickack": false, 00:18:19.038 "enable_placement_id": 0, 00:18:19.038 "enable_zerocopy_send_server": true, 00:18:19.038 "enable_zerocopy_send_client": false, 00:18:19.038 "zerocopy_threshold": 0, 00:18:19.038 "tls_version": 0, 00:18:19.038 "enable_ktls": false 00:18:19.038 } 00:18:19.038 } 00:18:19.038 ] 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "subsystem": "vmd", 00:18:19.038 "config": [] 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "subsystem": "accel", 00:18:19.038 "config": [ 00:18:19.038 { 00:18:19.038 "method": "accel_set_options", 00:18:19.038 "params": { 00:18:19.038 "small_cache_size": 128, 00:18:19.038 "large_cache_size": 16, 00:18:19.038 "task_count": 2048, 00:18:19.038 "sequence_count": 2048, 00:18:19.038 "buf_count": 2048 00:18:19.038 } 00:18:19.038 } 00:18:19.038 ] 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "subsystem": "bdev", 00:18:19.038 "config": [ 00:18:19.038 { 00:18:19.038 "method": "bdev_set_options", 00:18:19.038 "params": { 00:18:19.038 "bdev_io_pool_size": 65535, 00:18:19.038 "bdev_io_cache_size": 256, 00:18:19.038 "bdev_auto_examine": true, 00:18:19.038 "iobuf_small_cache_size": 128, 00:18:19.038 "iobuf_large_cache_size": 16 00:18:19.038 } 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "method": "bdev_raid_set_options", 00:18:19.038 "params": { 00:18:19.038 "process_window_size_kb": 1024 00:18:19.038 } 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "method": "bdev_iscsi_set_options", 00:18:19.038 "params": { 00:18:19.038 "timeout_sec": 30 00:18:19.038 } 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "method": "bdev_nvme_set_options", 00:18:19.038 "params": { 00:18:19.038 "action_on_timeout": "none", 00:18:19.038 "timeout_us": 0, 00:18:19.038 "timeout_admin_us": 0, 00:18:19.038 "keep_alive_timeout_ms": 10000, 00:18:19.038 "arbitration_burst": 0, 00:18:19.038 "low_priority_weight": 0, 00:18:19.038 "medium_priority_weight": 0, 00:18:19.038 "high_priority_weight": 0, 00:18:19.038 "nvme_adminq_poll_period_us": 10000, 00:18:19.038 "nvme_ioq_poll_period_us": 0, 00:18:19.038 "io_queue_requests": 512, 00:18:19.038 "delay_cmd_submit": true, 00:18:19.038 "transport_retry_count": 4, 00:18:19.038 "bdev_retry_count": 3, 00:18:19.038 "transport_ack_timeout": 0, 00:18:19.038 "ctrlr_loss_timeout_sec": 0, 00:18:19.038 "reconnect_delay_sec": 0, 00:18:19.038 "fast_io_fail_timeout_sec": 0, 00:18:19.038 "disable_auto_failback": false, 
00:18:19.038 "generate_uuids": false, 00:18:19.038 "transport_tos": 0, 00:18:19.038 "nvme_error_stat": false, 00:18:19.038 "rdma_srq_size": 0, 00:18:19.038 "io_path_stat": false, 00:18:19.038 "allow_accel_sequence": false, 00:18:19.038 "rdma_max_cq_size": 0, 00:18:19.038 "rdma_cm_event_timeout_ms": 0, 00:18:19.038 "dhchap_digests": [ 00:18:19.038 "sha256", 00:18:19.038 "sha384", 00:18:19.038 "sha512" 00:18:19.038 ], 00:18:19.038 "dhchap_dhgroups": [ 00:18:19.038 "null", 00:18:19.038 "ffdhe2048", 00:18:19.038 "ffdhe3072", 00:18:19.038 "ffdhe4096", 00:18:19.038 "ffdhe6144", 00:18:19.038 "ffdhe8192" 00:18:19.038 ] 00:18:19.038 } 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "method": "bdev_nvme_attach_controller", 00:18:19.038 "params": { 00:18:19.038 "name": "TLSTEST", 00:18:19.038 "trtype": "TCP", 00:18:19.038 "adrfam": "IPv4", 00:18:19.038 "traddr": "10.0.0.2", 00:18:19.038 "trsvcid": "4420", 00:18:19.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.038 "prchk_reftag": false, 00:18:19.038 "prchk_guard": false, 00:18:19.038 "ctrlr_loss_timeout_sec": 0, 00:18:19.038 "reconnect_delay_sec": 0, 00:18:19.038 "fast_io_fail_timeout_sec": 0, 00:18:19.038 "psk": "/tmp/tmp.1w7s70vSTv", 00:18:19.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.038 "hdgst": false, 00:18:19.038 "ddgst": false 00:18:19.038 } 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "method": "bdev_nvme_set_hotplug", 00:18:19.038 "params": { 00:18:19.038 "period_us": 100000, 00:18:19.038 "enable": false 00:18:19.038 } 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "method": "bdev_wait_for_examine" 00:18:19.038 } 00:18:19.038 ] 00:18:19.038 }, 00:18:19.038 { 00:18:19.038 "subsystem": "nbd", 00:18:19.038 "config": [] 00:18:19.038 } 00:18:19.038 ] 00:18:19.038 }' 00:18:19.038 11:04:15 -- target/tls.sh@199 -- # killprocess 357429 00:18:19.038 11:04:15 -- common/autotest_common.sh@946 -- # '[' -z 357429 ']' 00:18:19.038 11:04:15 -- common/autotest_common.sh@950 -- # kill -0 357429 00:18:19.038 11:04:15 -- common/autotest_common.sh@951 -- # uname 00:18:19.038 11:04:15 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:19.038 11:04:15 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 357429 00:18:19.038 11:04:15 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:19.038 11:04:15 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:19.038 11:04:15 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 357429' 00:18:19.038 killing process with pid 357429 00:18:19.038 11:04:15 -- common/autotest_common.sh@965 -- # kill 357429 00:18:19.038 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.038 00:18:19.038 Latency(us) 00:18:19.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.038 =================================================================================================================== 00:18:19.038 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.038 [2024-05-15 11:04:15.551742] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:19.038 11:04:15 -- common/autotest_common.sh@970 -- # wait 357429 00:18:19.038 11:04:15 -- target/tls.sh@200 -- # killprocess 357064 00:18:19.038 11:04:15 -- common/autotest_common.sh@946 -- # '[' -z 357064 ']' 00:18:19.038 11:04:15 -- common/autotest_common.sh@950 -- # kill -0 357064 00:18:19.038 11:04:15 -- common/autotest_common.sh@951 -- # uname 00:18:19.038 
11:04:15 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:19.038 11:04:15 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 357064 00:18:19.304 11:04:15 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:19.304 11:04:15 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:19.304 11:04:15 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 357064' 00:18:19.304 killing process with pid 357064 00:18:19.304 11:04:15 -- common/autotest_common.sh@965 -- # kill 357064 00:18:19.304 [2024-05-15 11:04:15.718588] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:19.304 [2024-05-15 11:04:15.718618] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:19.304 11:04:15 -- common/autotest_common.sh@970 -- # wait 357064 00:18:19.304 11:04:15 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:19.304 11:04:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:19.304 11:04:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:19.304 11:04:15 -- common/autotest_common.sh@10 -- # set +x 00:18:19.304 11:04:15 -- target/tls.sh@203 -- # echo '{ 00:18:19.304 "subsystems": [ 00:18:19.304 { 00:18:19.304 "subsystem": "keyring", 00:18:19.304 "config": [] 00:18:19.304 }, 00:18:19.304 { 00:18:19.304 "subsystem": "iobuf", 00:18:19.304 "config": [ 00:18:19.304 { 00:18:19.304 "method": "iobuf_set_options", 00:18:19.304 "params": { 00:18:19.304 "small_pool_count": 8192, 00:18:19.304 "large_pool_count": 1024, 00:18:19.304 "small_bufsize": 8192, 00:18:19.304 "large_bufsize": 135168 00:18:19.304 } 00:18:19.304 } 00:18:19.304 ] 00:18:19.304 }, 00:18:19.304 { 00:18:19.304 "subsystem": "sock", 00:18:19.304 "config": [ 00:18:19.304 { 00:18:19.304 "method": "sock_impl_set_options", 00:18:19.304 "params": { 00:18:19.305 "impl_name": "posix", 00:18:19.305 "recv_buf_size": 2097152, 00:18:19.305 "send_buf_size": 2097152, 00:18:19.305 "enable_recv_pipe": true, 00:18:19.305 "enable_quickack": false, 00:18:19.305 "enable_placement_id": 0, 00:18:19.305 "enable_zerocopy_send_server": true, 00:18:19.305 "enable_zerocopy_send_client": false, 00:18:19.305 "zerocopy_threshold": 0, 00:18:19.305 "tls_version": 0, 00:18:19.305 "enable_ktls": false 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "sock_impl_set_options", 00:18:19.305 "params": { 00:18:19.305 "impl_name": "ssl", 00:18:19.305 "recv_buf_size": 4096, 00:18:19.305 "send_buf_size": 4096, 00:18:19.305 "enable_recv_pipe": true, 00:18:19.305 "enable_quickack": false, 00:18:19.305 "enable_placement_id": 0, 00:18:19.305 "enable_zerocopy_send_server": true, 00:18:19.305 "enable_zerocopy_send_client": false, 00:18:19.305 "zerocopy_threshold": 0, 00:18:19.305 "tls_version": 0, 00:18:19.305 "enable_ktls": false 00:18:19.305 } 00:18:19.305 } 00:18:19.305 ] 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "subsystem": "vmd", 00:18:19.305 "config": [] 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "subsystem": "accel", 00:18:19.305 "config": [ 00:18:19.305 { 00:18:19.305 "method": "accel_set_options", 00:18:19.305 "params": { 00:18:19.305 "small_cache_size": 128, 00:18:19.305 "large_cache_size": 16, 00:18:19.305 "task_count": 2048, 00:18:19.305 "sequence_count": 2048, 00:18:19.305 "buf_count": 2048 00:18:19.305 } 00:18:19.305 } 00:18:19.305 ] 00:18:19.305 
}, 00:18:19.305 { 00:18:19.305 "subsystem": "bdev", 00:18:19.305 "config": [ 00:18:19.305 { 00:18:19.305 "method": "bdev_set_options", 00:18:19.305 "params": { 00:18:19.305 "bdev_io_pool_size": 65535, 00:18:19.305 "bdev_io_cache_size": 256, 00:18:19.305 "bdev_auto_examine": true, 00:18:19.305 "iobuf_small_cache_size": 128, 00:18:19.305 "iobuf_large_cache_size": 16 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "bdev_raid_set_options", 00:18:19.305 "params": { 00:18:19.305 "process_window_size_kb": 1024 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "bdev_iscsi_set_options", 00:18:19.305 "params": { 00:18:19.305 "timeout_sec": 30 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "bdev_nvme_set_options", 00:18:19.305 "params": { 00:18:19.305 "action_on_timeout": "none", 00:18:19.305 "timeout_us": 0, 00:18:19.305 "timeout_admin_us": 0, 00:18:19.305 "keep_alive_timeout_ms": 10000, 00:18:19.305 "arbitration_burst": 0, 00:18:19.305 "low_priority_weight": 0, 00:18:19.305 "medium_priority_weight": 0, 00:18:19.305 "high_priority_weight": 0, 00:18:19.305 "nvme_adminq_poll_period_us": 10000, 00:18:19.305 "nvme_ioq_poll_period_us": 0, 00:18:19.305 "io_queue_requests": 0, 00:18:19.305 "delay_cmd_submit": true, 00:18:19.305 "transport_retry_count": 4, 00:18:19.305 "bdev_retry_count": 3, 00:18:19.305 "transport_ack_timeout": 0, 00:18:19.305 "ctrlr_loss_timeout_sec": 0, 00:18:19.305 "reconnect_delay_sec": 0, 00:18:19.305 "fast_io_fail_timeout_sec": 0, 00:18:19.305 "disable_auto_failback": false, 00:18:19.305 "generate_uuids": false, 00:18:19.305 "transport_tos": 0, 00:18:19.305 "nvme_error_stat": false, 00:18:19.305 "rdma_srq_size": 0, 00:18:19.305 "io_path_stat": false, 00:18:19.305 "allow_accel_sequence": false, 00:18:19.305 "rdma_max_cq_size": 0, 00:18:19.305 "rdma_cm_event_timeout_ms": 0, 00:18:19.305 "dhchap_digests": [ 00:18:19.305 "sha256", 00:18:19.305 "sha384", 00:18:19.305 "sha512" 00:18:19.305 ], 00:18:19.305 "dhchap_dhgroups": [ 00:18:19.305 "null", 00:18:19.305 "ffdhe2048", 00:18:19.305 "ffdhe3072", 00:18:19.305 "ffdhe4096", 00:18:19.305 "ffdhe6144", 00:18:19.305 "ffdhe8192" 00:18:19.305 ] 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "bdev_nvme_set_hotplug", 00:18:19.305 "params": { 00:18:19.305 "period_us": 100000, 00:18:19.305 "enable": false 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "bdev_malloc_create", 00:18:19.305 "params": { 00:18:19.305 "name": "malloc0", 00:18:19.305 "num_blocks": 8192, 00:18:19.305 "block_size": 4096, 00:18:19.305 "physical_block_size": 4096, 00:18:19.305 "uuid": "11cc279e-b6b7-468a-91d4-9f9f34b3f8cd", 00:18:19.305 "optimal_io_boundary": 0 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "bdev_wait_for_examine" 00:18:19.305 } 00:18:19.305 ] 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "subsystem": "nbd", 00:18:19.305 "config": [] 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "subsystem": "scheduler", 00:18:19.305 "config": [ 00:18:19.305 { 00:18:19.305 "method": "framework_set_scheduler", 00:18:19.305 "params": { 00:18:19.305 "name": "static" 00:18:19.305 } 00:18:19.305 } 00:18:19.305 ] 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "subsystem": "nvmf", 00:18:19.305 "config": [ 00:18:19.305 { 00:18:19.305 "method": "nvmf_set_config", 00:18:19.305 "params": { 00:18:19.305 "discovery_filter": "match_any", 00:18:19.305 "admin_cmd_passthru": { 00:18:19.305 "identify_ctrlr": false 00:18:19.305 } 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 
00:18:19.305 "method": "nvmf_set_max_subsystems", 00:18:19.305 "params": { 00:18:19.305 "max_subsystems": 1024 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "nvmf_set_crdt", 00:18:19.305 "params": { 00:18:19.305 "crdt1": 0, 00:18:19.305 "crdt2": 0, 00:18:19.305 "crdt3": 0 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "nvmf_create_transport", 00:18:19.305 "params": { 00:18:19.305 "trtype": "TCP", 00:18:19.305 "max_queue_depth": 128, 00:18:19.305 "max_io_qpairs_per_ctrlr": 127, 00:18:19.305 "in_capsule_data_size": 4096, 00:18:19.305 "max_io_size": 131072, 00:18:19.305 "io_unit_size": 131072, 00:18:19.305 "max_aq_depth": 128, 00:18:19.305 "num_shared_buffers": 511, 00:18:19.305 "buf_cache_size": 4294967295, 00:18:19.305 "dif_insert_or_strip": false, 00:18:19.305 "zcopy": false, 00:18:19.305 "c2h_success": false, 00:18:19.305 "sock_priority": 0, 00:18:19.305 "abort_timeout_sec": 1, 00:18:19.305 "ack_timeout": 0, 00:18:19.305 "data_wr_pool_size": 0 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "nvmf_create_subsystem", 00:18:19.305 "params": { 00:18:19.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.305 "allow_any_host": false, 00:18:19.305 "serial_number": "SPDK00000000000001", 00:18:19.305 "model_number": "SPDK bdev Controller", 00:18:19.305 "max_namespaces": 10, 00:18:19.305 "min_cntlid": 1, 00:18:19.305 "max_cntlid": 65519, 00:18:19.305 "ana_reporting": false 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "nvmf_subsystem_add_host", 00:18:19.305 "params": { 00:18:19.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.305 "host": "nqn.2016-06.io.spdk:host1", 00:18:19.305 "psk": "/tmp/tmp.1w7s70vSTv" 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "nvmf_subsystem_add_ns", 00:18:19.305 "params": { 00:18:19.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.305 "namespace": { 00:18:19.305 "nsid": 1, 00:18:19.305 "bdev_name": "malloc0", 00:18:19.305 "nguid": "11CC279EB6B7468A91D49F9F34B3F8CD", 00:18:19.305 "uuid": "11cc279e-b6b7-468a-91d4-9f9f34b3f8cd", 00:18:19.305 "no_auto_visible": false 00:18:19.305 } 00:18:19.305 } 00:18:19.305 }, 00:18:19.305 { 00:18:19.305 "method": "nvmf_subsystem_add_listener", 00:18:19.305 "params": { 00:18:19.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.306 "listen_address": { 00:18:19.306 "trtype": "TCP", 00:18:19.306 "adrfam": "IPv4", 00:18:19.306 "traddr": "10.0.0.2", 00:18:19.306 "trsvcid": "4420" 00:18:19.306 }, 00:18:19.306 "secure_channel": true 00:18:19.306 } 00:18:19.306 } 00:18:19.306 ] 00:18:19.306 } 00:18:19.306 ] 00:18:19.306 }' 00:18:19.306 11:04:15 -- nvmf/common.sh@470 -- # nvmfpid=357782 00:18:19.306 11:04:15 -- nvmf/common.sh@471 -- # waitforlisten 357782 00:18:19.306 11:04:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:19.306 11:04:15 -- common/autotest_common.sh@827 -- # '[' -z 357782 ']' 00:18:19.306 11:04:15 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.306 11:04:15 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:19.306 11:04:15 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:19.306 11:04:15 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:19.306 11:04:15 -- common/autotest_common.sh@10 -- # set +x 00:18:19.306 [2024-05-15 11:04:15.903619] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:19.306 [2024-05-15 11:04:15.903700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.306 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.581 [2024-05-15 11:04:15.984036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.581 [2024-05-15 11:04:16.037030] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.581 [2024-05-15 11:04:16.037061] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.581 [2024-05-15 11:04:16.037066] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.581 [2024-05-15 11:04:16.037071] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.581 [2024-05-15 11:04:16.037075] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.581 [2024-05-15 11:04:16.037121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.581 [2024-05-15 11:04:16.212183] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.868 [2024-05-15 11:04:16.228160] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:19.868 [2024-05-15 11:04:16.244191] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:19.868 [2024-05-15 11:04:16.244226] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.868 [2024-05-15 11:04:16.256846] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.139 11:04:16 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:20.139 11:04:16 -- common/autotest_common.sh@860 -- # return 0 00:18:20.139 11:04:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:20.139 11:04:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.139 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:20.139 11:04:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.139 11:04:16 -- target/tls.sh@207 -- # bdevperf_pid=357820 00:18:20.139 11:04:16 -- target/tls.sh@208 -- # waitforlisten 357820 /var/tmp/bdevperf.sock 00:18:20.139 11:04:16 -- common/autotest_common.sh@827 -- # '[' -z 357820 ']' 00:18:20.139 11:04:16 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.139 11:04:16 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:20.139 11:04:16 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:20.139 11:04:16 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:20.139 11:04:16 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:20.139 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:18:20.139 11:04:16 -- target/tls.sh@204 -- # echo '{ 00:18:20.139 "subsystems": [ 00:18:20.139 { 00:18:20.139 "subsystem": "keyring", 00:18:20.139 "config": [] 00:18:20.139 }, 00:18:20.139 { 00:18:20.139 "subsystem": "iobuf", 00:18:20.139 "config": [ 00:18:20.139 { 00:18:20.139 "method": "iobuf_set_options", 00:18:20.139 "params": { 00:18:20.139 "small_pool_count": 8192, 00:18:20.139 "large_pool_count": 1024, 00:18:20.139 "small_bufsize": 8192, 00:18:20.139 "large_bufsize": 135168 00:18:20.139 } 00:18:20.139 } 00:18:20.139 ] 00:18:20.139 }, 00:18:20.139 { 00:18:20.139 "subsystem": "sock", 00:18:20.139 "config": [ 00:18:20.139 { 00:18:20.139 "method": "sock_impl_set_options", 00:18:20.139 "params": { 00:18:20.139 "impl_name": "posix", 00:18:20.139 "recv_buf_size": 2097152, 00:18:20.139 "send_buf_size": 2097152, 00:18:20.139 "enable_recv_pipe": true, 00:18:20.139 "enable_quickack": false, 00:18:20.139 "enable_placement_id": 0, 00:18:20.139 "enable_zerocopy_send_server": true, 00:18:20.139 "enable_zerocopy_send_client": false, 00:18:20.139 "zerocopy_threshold": 0, 00:18:20.139 "tls_version": 0, 00:18:20.139 "enable_ktls": false 00:18:20.139 } 00:18:20.139 }, 00:18:20.139 { 00:18:20.139 "method": "sock_impl_set_options", 00:18:20.139 "params": { 00:18:20.139 "impl_name": "ssl", 00:18:20.139 "recv_buf_size": 4096, 00:18:20.139 "send_buf_size": 4096, 00:18:20.139 "enable_recv_pipe": true, 00:18:20.140 "enable_quickack": false, 00:18:20.140 "enable_placement_id": 0, 00:18:20.140 "enable_zerocopy_send_server": true, 00:18:20.140 "enable_zerocopy_send_client": false, 00:18:20.140 "zerocopy_threshold": 0, 00:18:20.140 "tls_version": 0, 00:18:20.140 "enable_ktls": false 00:18:20.140 } 00:18:20.140 } 00:18:20.140 ] 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "subsystem": "vmd", 00:18:20.140 "config": [] 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "subsystem": "accel", 00:18:20.140 "config": [ 00:18:20.140 { 00:18:20.140 "method": "accel_set_options", 00:18:20.140 "params": { 00:18:20.140 "small_cache_size": 128, 00:18:20.140 "large_cache_size": 16, 00:18:20.140 "task_count": 2048, 00:18:20.140 "sequence_count": 2048, 00:18:20.140 "buf_count": 2048 00:18:20.140 } 00:18:20.140 } 00:18:20.140 ] 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "subsystem": "bdev", 00:18:20.140 "config": [ 00:18:20.140 { 00:18:20.140 "method": "bdev_set_options", 00:18:20.140 "params": { 00:18:20.140 "bdev_io_pool_size": 65535, 00:18:20.140 "bdev_io_cache_size": 256, 00:18:20.140 "bdev_auto_examine": true, 00:18:20.140 "iobuf_small_cache_size": 128, 00:18:20.140 "iobuf_large_cache_size": 16 00:18:20.140 } 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "method": "bdev_raid_set_options", 00:18:20.140 "params": { 00:18:20.140 "process_window_size_kb": 1024 00:18:20.140 } 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "method": "bdev_iscsi_set_options", 00:18:20.140 "params": { 00:18:20.140 "timeout_sec": 30 00:18:20.140 } 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "method": "bdev_nvme_set_options", 00:18:20.140 "params": { 00:18:20.140 "action_on_timeout": "none", 00:18:20.140 "timeout_us": 0, 00:18:20.140 "timeout_admin_us": 0, 00:18:20.140 "keep_alive_timeout_ms": 10000, 00:18:20.140 
"arbitration_burst": 0, 00:18:20.140 "low_priority_weight": 0, 00:18:20.140 "medium_priority_weight": 0, 00:18:20.140 "high_priority_weight": 0, 00:18:20.140 "nvme_adminq_poll_period_us": 10000, 00:18:20.140 "nvme_ioq_poll_period_us": 0, 00:18:20.140 "io_queue_requests": 512, 00:18:20.140 "delay_cmd_submit": true, 00:18:20.140 "transport_retry_count": 4, 00:18:20.140 "bdev_retry_count": 3, 00:18:20.140 "transport_ack_timeout": 0, 00:18:20.140 "ctrlr_loss_timeout_sec": 0, 00:18:20.140 "reconnect_delay_sec": 0, 00:18:20.140 "fast_io_fail_timeout_sec": 0, 00:18:20.140 "disable_auto_failback": false, 00:18:20.140 "generate_uuids": false, 00:18:20.140 "transport_tos": 0, 00:18:20.140 "nvme_error_stat": false, 00:18:20.140 "rdma_srq_size": 0, 00:18:20.140 "io_path_stat": false, 00:18:20.140 "allow_accel_sequence": false, 00:18:20.140 "rdma_max_cq_size": 0, 00:18:20.140 "rdma_cm_event_timeout_ms": 0, 00:18:20.140 "dhchap_digests": [ 00:18:20.140 "sha256", 00:18:20.140 "sha384", 00:18:20.140 "sha512" 00:18:20.140 ], 00:18:20.140 "dhchap_dhgroups": [ 00:18:20.140 "null", 00:18:20.140 "ffdhe2048", 00:18:20.140 "ffdhe3072", 00:18:20.140 "ffdhe4096", 00:18:20.140 "ffdhe6144", 00:18:20.140 "ffdhe8192" 00:18:20.140 ] 00:18:20.140 } 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "method": "bdev_nvme_attach_controller", 00:18:20.140 "params": { 00:18:20.140 "name": "TLSTEST", 00:18:20.140 "trtype": "TCP", 00:18:20.140 "adrfam": "IPv4", 00:18:20.140 "traddr": "10.0.0.2", 00:18:20.140 "trsvcid": "4420", 00:18:20.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.140 "prchk_reftag": false, 00:18:20.140 "prchk_guard": false, 00:18:20.140 "ctrlr_loss_timeout_sec": 0, 00:18:20.140 "reconnect_delay_sec": 0, 00:18:20.140 "fast_io_fail_timeout_sec": 0, 00:18:20.140 "psk": "/tmp/tmp.1w7s70vSTv", 00:18:20.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.140 "hdgst": false, 00:18:20.140 "ddgst": false 00:18:20.140 } 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "method": "bdev_nvme_set_hotplug", 00:18:20.140 "params": { 00:18:20.140 "period_us": 100000, 00:18:20.140 "enable": false 00:18:20.140 } 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "method": "bdev_wait_for_examine" 00:18:20.140 } 00:18:20.140 ] 00:18:20.140 }, 00:18:20.140 { 00:18:20.140 "subsystem": "nbd", 00:18:20.140 "config": [] 00:18:20.140 } 00:18:20.140 ] 00:18:20.140 }' 00:18:20.140 [2024-05-15 11:04:16.742324] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:18:20.140 [2024-05-15 11:04:16.742376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357820 ] 00:18:20.140 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.415 [2024-05-15 11:04:16.802905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.415 [2024-05-15 11:04:16.868857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.415 [2024-05-15 11:04:16.985496] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.415 [2024-05-15 11:04:16.985568] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:21.013 11:04:17 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:21.013 11:04:17 -- common/autotest_common.sh@860 -- # return 0 00:18:21.013 11:04:17 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:21.013 Running I/O for 10 seconds... 00:18:31.272 00:18:31.272 Latency(us) 00:18:31.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.273 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.273 Verification LBA range: start 0x0 length 0x2000 00:18:31.273 TLSTESTn1 : 10.08 2755.76 10.76 0.00 0.00 46291.11 4669.44 82575.36 00:18:31.273 =================================================================================================================== 00:18:31.273 Total : 2755.76 10.76 0.00 0.00 46291.11 4669.44 82575.36 00:18:31.273 0 00:18:31.273 11:04:27 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.273 11:04:27 -- target/tls.sh@214 -- # killprocess 357820 00:18:31.273 11:04:27 -- common/autotest_common.sh@946 -- # '[' -z 357820 ']' 00:18:31.273 11:04:27 -- common/autotest_common.sh@950 -- # kill -0 357820 00:18:31.273 11:04:27 -- common/autotest_common.sh@951 -- # uname 00:18:31.273 11:04:27 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:31.273 11:04:27 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 357820 00:18:31.273 11:04:27 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:31.273 11:04:27 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:31.273 11:04:27 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 357820' 00:18:31.273 killing process with pid 357820 00:18:31.273 11:04:27 -- common/autotest_common.sh@965 -- # kill 357820 00:18:31.273 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.273 00:18:31.273 Latency(us) 00:18:31.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.273 =================================================================================================================== 00:18:31.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.273 [2024-05-15 11:04:27.782297] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:31.273 11:04:27 -- common/autotest_common.sh@970 -- # wait 357820 00:18:31.273 11:04:27 -- target/tls.sh@215 -- # killprocess 357782 00:18:31.273 11:04:27 -- common/autotest_common.sh@946 -- # '[' -z 357782 ']' 00:18:31.273 
11:04:27 -- common/autotest_common.sh@950 -- # kill -0 357782 00:18:31.273 11:04:27 -- common/autotest_common.sh@951 -- # uname 00:18:31.273 11:04:27 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:31.273 11:04:27 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 357782 00:18:31.565 11:04:27 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:31.565 11:04:27 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:31.565 11:04:27 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 357782' 00:18:31.565 killing process with pid 357782 00:18:31.565 11:04:27 -- common/autotest_common.sh@965 -- # kill 357782 00:18:31.565 [2024-05-15 11:04:27.951162] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:31.565 [2024-05-15 11:04:27.951190] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:31.565 11:04:27 -- common/autotest_common.sh@970 -- # wait 357782 00:18:31.565 11:04:28 -- target/tls.sh@218 -- # nvmfappstart 00:18:31.565 11:04:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:31.565 11:04:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:31.565 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:18:31.565 11:04:28 -- nvmf/common.sh@470 -- # nvmfpid=360173 00:18:31.565 11:04:28 -- nvmf/common.sh@471 -- # waitforlisten 360173 00:18:31.565 11:04:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:31.565 11:04:28 -- common/autotest_common.sh@827 -- # '[' -z 360173 ']' 00:18:31.565 11:04:28 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.565 11:04:28 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.565 11:04:28 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.565 11:04:28 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.565 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:18:31.565 [2024-05-15 11:04:28.126800] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:31.565 [2024-05-15 11:04:28.126853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.565 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.565 [2024-05-15 11:04:28.189887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.844 [2024-05-15 11:04:28.253617] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.844 [2024-05-15 11:04:28.253650] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.844 [2024-05-15 11:04:28.253657] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.844 [2024-05-15 11:04:28.253663] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:31.844 [2024-05-15 11:04:28.253669] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.844 [2024-05-15 11:04:28.253690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.476 11:04:28 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:32.476 11:04:28 -- common/autotest_common.sh@860 -- # return 0 00:18:32.476 11:04:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:32.476 11:04:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.476 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 11:04:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.476 11:04:28 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.1w7s70vSTv 00:18:32.476 11:04:28 -- target/tls.sh@49 -- # local key=/tmp/tmp.1w7s70vSTv 00:18:32.476 11:04:28 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:32.476 [2024-05-15 11:04:29.060485] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.476 11:04:29 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:32.759 11:04:29 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:32.759 [2024-05-15 11:04:29.369233] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:32.759 [2024-05-15 11:04:29.369279] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.759 [2024-05-15 11:04:29.369458] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.759 11:04:29 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.036 malloc0 00:18:33.036 11:04:29 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.340 11:04:29 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1w7s70vSTv 00:18:33.340 [2024-05-15 11:04:29.829243] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:33.340 11:04:29 -- target/tls.sh@222 -- # bdevperf_pid=360542 00:18:33.340 11:04:29 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.341 11:04:29 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:33.341 11:04:29 -- target/tls.sh@225 -- # waitforlisten 360542 /var/tmp/bdevperf.sock 00:18:33.341 11:04:29 -- common/autotest_common.sh@827 -- # '[' -z 360542 ']' 00:18:33.341 11:04:29 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.341 11:04:29 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:33.341 11:04:29 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:18:33.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.341 11:04:29 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:33.341 11:04:29 -- common/autotest_common.sh@10 -- # set +x 00:18:33.341 [2024-05-15 11:04:29.891672] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:33.341 [2024-05-15 11:04:29.891719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360542 ] 00:18:33.341 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.341 [2024-05-15 11:04:29.966742] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.618 [2024-05-15 11:04:30.022043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.243 11:04:30 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:34.243 11:04:30 -- common/autotest_common.sh@860 -- # return 0 00:18:34.243 11:04:30 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1w7s70vSTv 00:18:34.243 11:04:30 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:34.509 [2024-05-15 11:04:30.930380] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.509 nvme0n1 00:18:34.509 11:04:31 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:34.509 Running I/O for 1 seconds... 
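The trace above exercises the TLS path end to end: setup_nvmf_tgt provisions the target with a PSK file, and bdevperf then attaches using the same key registered through the keyring. Condensed from the rpc.py calls recorded in this run (the bdevperf RPC socket and the temporary key file /tmp/tmp.1w7s70vSTv are specific to this job; rpc.py is shown by its repo-relative path for brevity), the sequence is roughly:

    # Target side, as traced by setup_nvmf_tgt: TCP transport, subsystem, TLS-enabled
    # listener (-k), a malloc-backed namespace, and a host entry bound to the PSK file.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1w7s70vSTv

    # Initiator side, against the bdevperf RPC socket: register the PSK as key0 in the
    # keyring, then attach the controller referencing the key by name.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1w7s70vSTv
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The deprecation warnings in this log ("PSK path", "spdk_nvme_ctrlr_opts.psk", both scheduled for removal in v24.09) concern the file-path form of the key; the keyring-based form used on the initiator here appears to be the intended replacement.
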
00:18:35.516 00:18:35.516 Latency(us) 00:18:35.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.516 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:35.516 Verification LBA range: start 0x0 length 0x2000 00:18:35.516 nvme0n1 : 1.01 4714.13 18.41 0.00 0.00 26942.11 4450.99 68594.35 00:18:35.516 =================================================================================================================== 00:18:35.516 Total : 4714.13 18.41 0.00 0.00 26942.11 4450.99 68594.35 00:18:35.516 0 00:18:35.516 11:04:32 -- target/tls.sh@234 -- # killprocess 360542 00:18:35.516 11:04:32 -- common/autotest_common.sh@946 -- # '[' -z 360542 ']' 00:18:35.516 11:04:32 -- common/autotest_common.sh@950 -- # kill -0 360542 00:18:35.516 11:04:32 -- common/autotest_common.sh@951 -- # uname 00:18:35.516 11:04:32 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:35.785 11:04:32 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 360542 00:18:35.785 11:04:32 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:35.785 11:04:32 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:35.785 11:04:32 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 360542' 00:18:35.785 killing process with pid 360542 00:18:35.785 11:04:32 -- common/autotest_common.sh@965 -- # kill 360542 00:18:35.785 Received shutdown signal, test time was about 1.000000 seconds 00:18:35.785 00:18:35.785 Latency(us) 00:18:35.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.785 =================================================================================================================== 00:18:35.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.785 11:04:32 -- common/autotest_common.sh@970 -- # wait 360542 00:18:35.785 11:04:32 -- target/tls.sh@235 -- # killprocess 360173 00:18:35.785 11:04:32 -- common/autotest_common.sh@946 -- # '[' -z 360173 ']' 00:18:35.785 11:04:32 -- common/autotest_common.sh@950 -- # kill -0 360173 00:18:35.785 11:04:32 -- common/autotest_common.sh@951 -- # uname 00:18:35.785 11:04:32 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:35.785 11:04:32 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 360173 00:18:35.785 11:04:32 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:35.785 11:04:32 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:35.785 11:04:32 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 360173' 00:18:35.785 killing process with pid 360173 00:18:35.785 11:04:32 -- common/autotest_common.sh@965 -- # kill 360173 00:18:35.785 [2024-05-15 11:04:32.376004] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:35.785 [2024-05-15 11:04:32.376040] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:35.785 11:04:32 -- common/autotest_common.sh@970 -- # wait 360173 00:18:36.068 11:04:32 -- target/tls.sh@238 -- # nvmfappstart 00:18:36.068 11:04:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:36.068 11:04:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:36.068 11:04:32 -- common/autotest_common.sh@10 -- # set +x 00:18:36.068 11:04:32 -- nvmf/common.sh@470 -- # nvmfpid=361060 00:18:36.068 11:04:32 -- 
nvmf/common.sh@471 -- # waitforlisten 361060 00:18:36.068 11:04:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:36.068 11:04:32 -- common/autotest_common.sh@827 -- # '[' -z 361060 ']' 00:18:36.068 11:04:32 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.068 11:04:32 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:36.068 11:04:32 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.068 11:04:32 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:36.068 11:04:32 -- common/autotest_common.sh@10 -- # set +x 00:18:36.068 [2024-05-15 11:04:32.580432] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:36.068 [2024-05-15 11:04:32.580497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.068 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.068 [2024-05-15 11:04:32.644438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.068 [2024-05-15 11:04:32.709689] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.068 [2024-05-15 11:04:32.709722] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.068 [2024-05-15 11:04:32.709730] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.068 [2024-05-15 11:04:32.709736] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.068 [2024-05-15 11:04:32.709742] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:36.068 [2024-05-15 11:04:32.709758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.685 11:04:33 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:36.950 11:04:33 -- common/autotest_common.sh@860 -- # return 0 00:18:36.950 11:04:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:36.950 11:04:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.950 11:04:33 -- common/autotest_common.sh@10 -- # set +x 00:18:36.950 11:04:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.950 11:04:33 -- target/tls.sh@239 -- # rpc_cmd 00:18:36.950 11:04:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.950 11:04:33 -- common/autotest_common.sh@10 -- # set +x 00:18:36.950 [2024-05-15 11:04:33.376467] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.950 malloc0 00:18:36.950 [2024-05-15 11:04:33.403181] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:36.950 [2024-05-15 11:04:33.403227] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.950 [2024-05-15 11:04:33.403408] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.950 11:04:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.950 11:04:33 -- target/tls.sh@252 -- # bdevperf_pid=361269 00:18:36.950 11:04:33 -- target/tls.sh@254 -- # waitforlisten 361269 /var/tmp/bdevperf.sock 00:18:36.950 11:04:33 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:36.950 11:04:33 -- common/autotest_common.sh@827 -- # '[' -z 361269 ']' 00:18:36.950 11:04:33 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.951 11:04:33 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:36.951 11:04:33 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.951 11:04:33 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:36.951 11:04:33 -- common/autotest_common.sh@10 -- # set +x 00:18:36.951 [2024-05-15 11:04:33.479541] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:18:36.951 [2024-05-15 11:04:33.479591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361269 ] 00:18:36.951 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.951 [2024-05-15 11:04:33.554883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.211 [2024-05-15 11:04:33.608107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.783 11:04:34 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:37.783 11:04:34 -- common/autotest_common.sh@860 -- # return 0 00:18:37.783 11:04:34 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1w7s70vSTv 00:18:37.783 11:04:34 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:38.042 [2024-05-15 11:04:34.538417] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.042 nvme0n1 00:18:38.042 11:04:34 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.303 Running I/O for 1 seconds... 00:18:39.244 00:18:39.245 Latency(us) 00:18:39.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.245 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:39.245 Verification LBA range: start 0x0 length 0x2000 00:18:39.245 nvme0n1 : 1.05 4420.71 17.27 0.00 0.00 28380.63 5106.35 97430.19 00:18:39.245 =================================================================================================================== 00:18:39.245 Total : 4420.71 17.27 0.00 0.00 28380.63 5106.35 97430.19 00:18:39.245 0 00:18:39.245 11:04:35 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:39.245 11:04:35 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.245 11:04:35 -- common/autotest_common.sh@10 -- # set +x 00:18:39.505 11:04:35 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.505 11:04:35 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:39.505 "subsystems": [ 00:18:39.505 { 00:18:39.505 "subsystem": "keyring", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "keyring_file_add_key", 00:18:39.505 "params": { 00:18:39.505 "name": "key0", 00:18:39.505 "path": "/tmp/tmp.1w7s70vSTv" 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "iobuf", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "iobuf_set_options", 00:18:39.505 "params": { 00:18:39.505 "small_pool_count": 8192, 00:18:39.505 "large_pool_count": 1024, 00:18:39.505 "small_bufsize": 8192, 00:18:39.505 "large_bufsize": 135168 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "sock", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "sock_impl_set_options", 00:18:39.505 "params": { 00:18:39.505 "impl_name": "posix", 00:18:39.505 "recv_buf_size": 2097152, 00:18:39.505 "send_buf_size": 2097152, 00:18:39.505 "enable_recv_pipe": true, 00:18:39.505 "enable_quickack": false, 00:18:39.505 "enable_placement_id": 0, 00:18:39.505 
"enable_zerocopy_send_server": true, 00:18:39.505 "enable_zerocopy_send_client": false, 00:18:39.505 "zerocopy_threshold": 0, 00:18:39.505 "tls_version": 0, 00:18:39.505 "enable_ktls": false 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "sock_impl_set_options", 00:18:39.505 "params": { 00:18:39.505 "impl_name": "ssl", 00:18:39.505 "recv_buf_size": 4096, 00:18:39.505 "send_buf_size": 4096, 00:18:39.505 "enable_recv_pipe": true, 00:18:39.505 "enable_quickack": false, 00:18:39.505 "enable_placement_id": 0, 00:18:39.505 "enable_zerocopy_send_server": true, 00:18:39.505 "enable_zerocopy_send_client": false, 00:18:39.505 "zerocopy_threshold": 0, 00:18:39.505 "tls_version": 0, 00:18:39.505 "enable_ktls": false 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "vmd", 00:18:39.505 "config": [] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "accel", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "accel_set_options", 00:18:39.505 "params": { 00:18:39.505 "small_cache_size": 128, 00:18:39.505 "large_cache_size": 16, 00:18:39.505 "task_count": 2048, 00:18:39.505 "sequence_count": 2048, 00:18:39.505 "buf_count": 2048 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "bdev", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "bdev_set_options", 00:18:39.505 "params": { 00:18:39.505 "bdev_io_pool_size": 65535, 00:18:39.505 "bdev_io_cache_size": 256, 00:18:39.505 "bdev_auto_examine": true, 00:18:39.505 "iobuf_small_cache_size": 128, 00:18:39.505 "iobuf_large_cache_size": 16 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "bdev_raid_set_options", 00:18:39.505 "params": { 00:18:39.505 "process_window_size_kb": 1024 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "bdev_iscsi_set_options", 00:18:39.505 "params": { 00:18:39.505 "timeout_sec": 30 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "bdev_nvme_set_options", 00:18:39.505 "params": { 00:18:39.505 "action_on_timeout": "none", 00:18:39.505 "timeout_us": 0, 00:18:39.505 "timeout_admin_us": 0, 00:18:39.505 "keep_alive_timeout_ms": 10000, 00:18:39.505 "arbitration_burst": 0, 00:18:39.505 "low_priority_weight": 0, 00:18:39.505 "medium_priority_weight": 0, 00:18:39.505 "high_priority_weight": 0, 00:18:39.505 "nvme_adminq_poll_period_us": 10000, 00:18:39.505 "nvme_ioq_poll_period_us": 0, 00:18:39.505 "io_queue_requests": 0, 00:18:39.505 "delay_cmd_submit": true, 00:18:39.505 "transport_retry_count": 4, 00:18:39.505 "bdev_retry_count": 3, 00:18:39.505 "transport_ack_timeout": 0, 00:18:39.505 "ctrlr_loss_timeout_sec": 0, 00:18:39.505 "reconnect_delay_sec": 0, 00:18:39.505 "fast_io_fail_timeout_sec": 0, 00:18:39.505 "disable_auto_failback": false, 00:18:39.505 "generate_uuids": false, 00:18:39.505 "transport_tos": 0, 00:18:39.505 "nvme_error_stat": false, 00:18:39.505 "rdma_srq_size": 0, 00:18:39.505 "io_path_stat": false, 00:18:39.505 "allow_accel_sequence": false, 00:18:39.505 "rdma_max_cq_size": 0, 00:18:39.505 "rdma_cm_event_timeout_ms": 0, 00:18:39.505 "dhchap_digests": [ 00:18:39.505 "sha256", 00:18:39.505 "sha384", 00:18:39.505 "sha512" 00:18:39.505 ], 00:18:39.505 "dhchap_dhgroups": [ 00:18:39.505 "null", 00:18:39.505 "ffdhe2048", 00:18:39.505 "ffdhe3072", 00:18:39.505 "ffdhe4096", 00:18:39.505 "ffdhe6144", 00:18:39.505 "ffdhe8192" 00:18:39.505 ] 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": 
"bdev_nvme_set_hotplug", 00:18:39.505 "params": { 00:18:39.505 "period_us": 100000, 00:18:39.505 "enable": false 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "bdev_malloc_create", 00:18:39.505 "params": { 00:18:39.505 "name": "malloc0", 00:18:39.505 "num_blocks": 8192, 00:18:39.505 "block_size": 4096, 00:18:39.505 "physical_block_size": 4096, 00:18:39.505 "uuid": "4f4fec5d-b0fe-4121-be09-1aa3f06e1e94", 00:18:39.505 "optimal_io_boundary": 0 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "bdev_wait_for_examine" 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "nbd", 00:18:39.505 "config": [] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "scheduler", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "framework_set_scheduler", 00:18:39.505 "params": { 00:18:39.505 "name": "static" 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "nvmf", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "nvmf_set_config", 00:18:39.505 "params": { 00:18:39.505 "discovery_filter": "match_any", 00:18:39.505 "admin_cmd_passthru": { 00:18:39.505 "identify_ctrlr": false 00:18:39.505 } 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "nvmf_set_max_subsystems", 00:18:39.505 "params": { 00:18:39.505 "max_subsystems": 1024 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "nvmf_set_crdt", 00:18:39.505 "params": { 00:18:39.505 "crdt1": 0, 00:18:39.505 "crdt2": 0, 00:18:39.505 "crdt3": 0 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "nvmf_create_transport", 00:18:39.505 "params": { 00:18:39.505 "trtype": "TCP", 00:18:39.505 "max_queue_depth": 128, 00:18:39.505 "max_io_qpairs_per_ctrlr": 127, 00:18:39.505 "in_capsule_data_size": 4096, 00:18:39.505 "max_io_size": 131072, 00:18:39.505 "io_unit_size": 131072, 00:18:39.505 "max_aq_depth": 128, 00:18:39.505 "num_shared_buffers": 511, 00:18:39.505 "buf_cache_size": 4294967295, 00:18:39.505 "dif_insert_or_strip": false, 00:18:39.505 "zcopy": false, 00:18:39.505 "c2h_success": false, 00:18:39.505 "sock_priority": 0, 00:18:39.505 "abort_timeout_sec": 1, 00:18:39.505 "ack_timeout": 0, 00:18:39.505 "data_wr_pool_size": 0 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "nvmf_create_subsystem", 00:18:39.505 "params": { 00:18:39.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.505 "allow_any_host": false, 00:18:39.505 "serial_number": "00000000000000000000", 00:18:39.505 "model_number": "SPDK bdev Controller", 00:18:39.505 "max_namespaces": 32, 00:18:39.505 "min_cntlid": 1, 00:18:39.505 "max_cntlid": 65519, 00:18:39.505 "ana_reporting": false 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "nvmf_subsystem_add_host", 00:18:39.505 "params": { 00:18:39.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.505 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.505 "psk": "key0" 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "nvmf_subsystem_add_ns", 00:18:39.505 "params": { 00:18:39.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.505 "namespace": { 00:18:39.505 "nsid": 1, 00:18:39.505 "bdev_name": "malloc0", 00:18:39.505 "nguid": "4F4FEC5DB0FE4121BE091AA3F06E1E94", 00:18:39.505 "uuid": "4f4fec5d-b0fe-4121-be09-1aa3f06e1e94", 00:18:39.505 "no_auto_visible": false 00:18:39.505 } 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "nvmf_subsystem_add_listener", 00:18:39.505 "params": { 
00:18:39.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.505 "listen_address": { 00:18:39.505 "trtype": "TCP", 00:18:39.505 "adrfam": "IPv4", 00:18:39.505 "traddr": "10.0.0.2", 00:18:39.505 "trsvcid": "4420" 00:18:39.505 }, 00:18:39.505 "secure_channel": true 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }' 00:18:39.505 11:04:35 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:39.505 11:04:36 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:39.505 "subsystems": [ 00:18:39.505 { 00:18:39.505 "subsystem": "keyring", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "keyring_file_add_key", 00:18:39.505 "params": { 00:18:39.505 "name": "key0", 00:18:39.505 "path": "/tmp/tmp.1w7s70vSTv" 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "iobuf", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "iobuf_set_options", 00:18:39.505 "params": { 00:18:39.505 "small_pool_count": 8192, 00:18:39.505 "large_pool_count": 1024, 00:18:39.505 "small_bufsize": 8192, 00:18:39.505 "large_bufsize": 135168 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "sock", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "sock_impl_set_options", 00:18:39.505 "params": { 00:18:39.505 "impl_name": "posix", 00:18:39.505 "recv_buf_size": 2097152, 00:18:39.505 "send_buf_size": 2097152, 00:18:39.505 "enable_recv_pipe": true, 00:18:39.505 "enable_quickack": false, 00:18:39.505 "enable_placement_id": 0, 00:18:39.505 "enable_zerocopy_send_server": true, 00:18:39.505 "enable_zerocopy_send_client": false, 00:18:39.505 "zerocopy_threshold": 0, 00:18:39.505 "tls_version": 0, 00:18:39.505 "enable_ktls": false 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "method": "sock_impl_set_options", 00:18:39.505 "params": { 00:18:39.505 "impl_name": "ssl", 00:18:39.505 "recv_buf_size": 4096, 00:18:39.505 "send_buf_size": 4096, 00:18:39.505 "enable_recv_pipe": true, 00:18:39.505 "enable_quickack": false, 00:18:39.505 "enable_placement_id": 0, 00:18:39.505 "enable_zerocopy_send_server": true, 00:18:39.505 "enable_zerocopy_send_client": false, 00:18:39.505 "zerocopy_threshold": 0, 00:18:39.505 "tls_version": 0, 00:18:39.505 "enable_ktls": false 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "vmd", 00:18:39.505 "config": [] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "accel", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "accel_set_options", 00:18:39.505 "params": { 00:18:39.505 "small_cache_size": 128, 00:18:39.505 "large_cache_size": 16, 00:18:39.505 "task_count": 2048, 00:18:39.505 "sequence_count": 2048, 00:18:39.505 "buf_count": 2048 00:18:39.505 } 00:18:39.505 } 00:18:39.505 ] 00:18:39.505 }, 00:18:39.505 { 00:18:39.505 "subsystem": "bdev", 00:18:39.505 "config": [ 00:18:39.505 { 00:18:39.505 "method": "bdev_set_options", 00:18:39.505 "params": { 00:18:39.505 "bdev_io_pool_size": 65535, 00:18:39.505 "bdev_io_cache_size": 256, 00:18:39.505 "bdev_auto_examine": true, 00:18:39.505 "iobuf_small_cache_size": 128, 00:18:39.505 "iobuf_large_cache_size": 16 00:18:39.505 } 00:18:39.505 }, 00:18:39.505 { 00:18:39.506 "method": "bdev_raid_set_options", 00:18:39.506 "params": { 00:18:39.506 "process_window_size_kb": 1024 00:18:39.506 } 00:18:39.506 }, 00:18:39.506 { 00:18:39.506 "method": 
"bdev_iscsi_set_options", 00:18:39.506 "params": { 00:18:39.506 "timeout_sec": 30 00:18:39.506 } 00:18:39.506 }, 00:18:39.506 { 00:18:39.506 "method": "bdev_nvme_set_options", 00:18:39.506 "params": { 00:18:39.506 "action_on_timeout": "none", 00:18:39.506 "timeout_us": 0, 00:18:39.506 "timeout_admin_us": 0, 00:18:39.506 "keep_alive_timeout_ms": 10000, 00:18:39.506 "arbitration_burst": 0, 00:18:39.506 "low_priority_weight": 0, 00:18:39.506 "medium_priority_weight": 0, 00:18:39.506 "high_priority_weight": 0, 00:18:39.506 "nvme_adminq_poll_period_us": 10000, 00:18:39.506 "nvme_ioq_poll_period_us": 0, 00:18:39.506 "io_queue_requests": 512, 00:18:39.506 "delay_cmd_submit": true, 00:18:39.506 "transport_retry_count": 4, 00:18:39.506 "bdev_retry_count": 3, 00:18:39.506 "transport_ack_timeout": 0, 00:18:39.506 "ctrlr_loss_timeout_sec": 0, 00:18:39.506 "reconnect_delay_sec": 0, 00:18:39.506 "fast_io_fail_timeout_sec": 0, 00:18:39.506 "disable_auto_failback": false, 00:18:39.506 "generate_uuids": false, 00:18:39.506 "transport_tos": 0, 00:18:39.506 "nvme_error_stat": false, 00:18:39.506 "rdma_srq_size": 0, 00:18:39.506 "io_path_stat": false, 00:18:39.506 "allow_accel_sequence": false, 00:18:39.506 "rdma_max_cq_size": 0, 00:18:39.506 "rdma_cm_event_timeout_ms": 0, 00:18:39.506 "dhchap_digests": [ 00:18:39.506 "sha256", 00:18:39.506 "sha384", 00:18:39.506 "sha512" 00:18:39.506 ], 00:18:39.506 "dhchap_dhgroups": [ 00:18:39.506 "null", 00:18:39.506 "ffdhe2048", 00:18:39.506 "ffdhe3072", 00:18:39.506 "ffdhe4096", 00:18:39.506 "ffdhe6144", 00:18:39.506 "ffdhe8192" 00:18:39.506 ] 00:18:39.506 } 00:18:39.506 }, 00:18:39.506 { 00:18:39.506 "method": "bdev_nvme_attach_controller", 00:18:39.506 "params": { 00:18:39.506 "name": "nvme0", 00:18:39.506 "trtype": "TCP", 00:18:39.506 "adrfam": "IPv4", 00:18:39.506 "traddr": "10.0.0.2", 00:18:39.506 "trsvcid": "4420", 00:18:39.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.506 "prchk_reftag": false, 00:18:39.506 "prchk_guard": false, 00:18:39.506 "ctrlr_loss_timeout_sec": 0, 00:18:39.506 "reconnect_delay_sec": 0, 00:18:39.506 "fast_io_fail_timeout_sec": 0, 00:18:39.506 "psk": "key0", 00:18:39.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.506 "hdgst": false, 00:18:39.506 "ddgst": false 00:18:39.506 } 00:18:39.506 }, 00:18:39.506 { 00:18:39.506 "method": "bdev_nvme_set_hotplug", 00:18:39.506 "params": { 00:18:39.506 "period_us": 100000, 00:18:39.506 "enable": false 00:18:39.506 } 00:18:39.506 }, 00:18:39.506 { 00:18:39.506 "method": "bdev_enable_histogram", 00:18:39.506 "params": { 00:18:39.506 "name": "nvme0n1", 00:18:39.506 "enable": true 00:18:39.506 } 00:18:39.506 }, 00:18:39.506 { 00:18:39.506 "method": "bdev_wait_for_examine" 00:18:39.506 } 00:18:39.506 ] 00:18:39.506 }, 00:18:39.506 { 00:18:39.506 "subsystem": "nbd", 00:18:39.506 "config": [] 00:18:39.506 } 00:18:39.506 ] 00:18:39.506 }' 00:18:39.506 11:04:36 -- target/tls.sh@266 -- # killprocess 361269 00:18:39.506 11:04:36 -- common/autotest_common.sh@946 -- # '[' -z 361269 ']' 00:18:39.506 11:04:36 -- common/autotest_common.sh@950 -- # kill -0 361269 00:18:39.506 11:04:36 -- common/autotest_common.sh@951 -- # uname 00:18:39.506 11:04:36 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:39.506 11:04:36 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 361269 00:18:39.766 11:04:36 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:39.766 11:04:36 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:39.766 11:04:36 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 361269' 00:18:39.766 killing process with pid 361269 00:18:39.766 11:04:36 -- common/autotest_common.sh@965 -- # kill 361269 00:18:39.766 Received shutdown signal, test time was about 1.000000 seconds 00:18:39.766 00:18:39.766 Latency(us) 00:18:39.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.766 =================================================================================================================== 00:18:39.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.766 11:04:36 -- common/autotest_common.sh@970 -- # wait 361269 00:18:39.766 11:04:36 -- target/tls.sh@267 -- # killprocess 361060 00:18:39.766 11:04:36 -- common/autotest_common.sh@946 -- # '[' -z 361060 ']' 00:18:39.766 11:04:36 -- common/autotest_common.sh@950 -- # kill -0 361060 00:18:39.766 11:04:36 -- common/autotest_common.sh@951 -- # uname 00:18:39.766 11:04:36 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:39.766 11:04:36 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 361060 00:18:39.766 11:04:36 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:39.766 11:04:36 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:39.766 11:04:36 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 361060' 00:18:39.766 killing process with pid 361060 00:18:39.766 11:04:36 -- common/autotest_common.sh@965 -- # kill 361060 00:18:39.766 [2024-05-15 11:04:36.371592] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:39.766 11:04:36 -- common/autotest_common.sh@970 -- # wait 361060 00:18:40.026 11:04:36 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:40.026 11:04:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:40.026 11:04:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:40.026 11:04:36 -- target/tls.sh@269 -- # echo '{ 00:18:40.026 "subsystems": [ 00:18:40.026 { 00:18:40.026 "subsystem": "keyring", 00:18:40.026 "config": [ 00:18:40.026 { 00:18:40.026 "method": "keyring_file_add_key", 00:18:40.026 "params": { 00:18:40.026 "name": "key0", 00:18:40.026 "path": "/tmp/tmp.1w7s70vSTv" 00:18:40.026 } 00:18:40.026 } 00:18:40.026 ] 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "subsystem": "iobuf", 00:18:40.026 "config": [ 00:18:40.026 { 00:18:40.026 "method": "iobuf_set_options", 00:18:40.026 "params": { 00:18:40.026 "small_pool_count": 8192, 00:18:40.026 "large_pool_count": 1024, 00:18:40.026 "small_bufsize": 8192, 00:18:40.026 "large_bufsize": 135168 00:18:40.026 } 00:18:40.026 } 00:18:40.026 ] 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "subsystem": "sock", 00:18:40.026 "config": [ 00:18:40.026 { 00:18:40.026 "method": "sock_impl_set_options", 00:18:40.026 "params": { 00:18:40.026 "impl_name": "posix", 00:18:40.026 "recv_buf_size": 2097152, 00:18:40.026 "send_buf_size": 2097152, 00:18:40.026 "enable_recv_pipe": true, 00:18:40.026 "enable_quickack": false, 00:18:40.026 "enable_placement_id": 0, 00:18:40.026 "enable_zerocopy_send_server": true, 00:18:40.026 "enable_zerocopy_send_client": false, 00:18:40.026 "zerocopy_threshold": 0, 00:18:40.026 "tls_version": 0, 00:18:40.026 "enable_ktls": false 00:18:40.026 } 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "method": "sock_impl_set_options", 00:18:40.026 "params": { 00:18:40.026 "impl_name": "ssl", 00:18:40.026 "recv_buf_size": 
4096, 00:18:40.026 "send_buf_size": 4096, 00:18:40.026 "enable_recv_pipe": true, 00:18:40.026 "enable_quickack": false, 00:18:40.026 "enable_placement_id": 0, 00:18:40.026 "enable_zerocopy_send_server": true, 00:18:40.026 "enable_zerocopy_send_client": false, 00:18:40.026 "zerocopy_threshold": 0, 00:18:40.026 "tls_version": 0, 00:18:40.026 "enable_ktls": false 00:18:40.026 } 00:18:40.026 } 00:18:40.026 ] 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "subsystem": "vmd", 00:18:40.026 "config": [] 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "subsystem": "accel", 00:18:40.026 "config": [ 00:18:40.026 { 00:18:40.026 "method": "accel_set_options", 00:18:40.026 "params": { 00:18:40.026 "small_cache_size": 128, 00:18:40.026 "large_cache_size": 16, 00:18:40.026 "task_count": 2048, 00:18:40.026 "sequence_count": 2048, 00:18:40.026 "buf_count": 2048 00:18:40.026 } 00:18:40.026 } 00:18:40.026 ] 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "subsystem": "bdev", 00:18:40.026 "config": [ 00:18:40.026 { 00:18:40.026 "method": "bdev_set_options", 00:18:40.026 "params": { 00:18:40.026 "bdev_io_pool_size": 65535, 00:18:40.026 "bdev_io_cache_size": 256, 00:18:40.026 "bdev_auto_examine": true, 00:18:40.026 "iobuf_small_cache_size": 128, 00:18:40.026 "iobuf_large_cache_size": 16 00:18:40.026 } 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "method": "bdev_raid_set_options", 00:18:40.026 "params": { 00:18:40.026 "process_window_size_kb": 1024 00:18:40.026 } 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "method": "bdev_iscsi_set_options", 00:18:40.026 "params": { 00:18:40.026 "timeout_sec": 30 00:18:40.026 } 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "method": "bdev_nvme_set_options", 00:18:40.026 "params": { 00:18:40.026 "action_on_timeout": "none", 00:18:40.026 "timeout_us": 0, 00:18:40.026 "timeout_admin_us": 0, 00:18:40.026 "keep_alive_timeout_ms": 10000, 00:18:40.026 "arbitration_burst": 0, 00:18:40.026 "low_priority_weight": 0, 00:18:40.026 "medium_priority_weight": 0, 00:18:40.026 "high_priority_weight": 0, 00:18:40.026 "nvme_adminq_poll_period_us": 10000, 00:18:40.026 "nvme_ioq_poll_period_us": 0, 00:18:40.026 "io_queue_requests": 0, 00:18:40.026 "delay_cmd_submit": true, 00:18:40.026 "transport_retry_count": 4, 00:18:40.026 "bdev_retry_count": 3, 00:18:40.026 "transport_ack_timeout": 0, 00:18:40.026 "ctrlr_loss_timeout_sec": 0, 00:18:40.026 "reconnect_delay_sec": 0, 00:18:40.026 "fast_io_fail_timeout_sec": 0, 00:18:40.026 "disable_auto_failback": false, 00:18:40.026 "generate_uuids": false, 00:18:40.026 "transport_tos": 0, 00:18:40.026 "nvme_error_stat": false, 00:18:40.026 "rdma_srq_size": 0, 00:18:40.026 "io_path_stat": false, 00:18:40.026 "allow_accel_sequence": false, 00:18:40.026 "rdma_max_cq_size": 0, 00:18:40.026 "rdma_cm_event_timeout_ms": 0, 00:18:40.026 "dhchap_digests": [ 00:18:40.026 "sha256", 00:18:40.026 "sha384", 00:18:40.026 "sha512" 00:18:40.026 ], 00:18:40.026 "dhchap_dhgroups": [ 00:18:40.026 "null", 00:18:40.026 "ffdhe2048", 00:18:40.026 "ffdhe3072", 00:18:40.026 "ffdhe4096", 00:18:40.026 "ffdhe6144", 00:18:40.026 "ffdhe8192" 00:18:40.026 ] 00:18:40.026 } 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "method": "bdev_nvme_set_hotplug", 00:18:40.026 "params": { 00:18:40.026 "period_us": 100000, 00:18:40.026 "enable": false 00:18:40.026 } 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "method": "bdev_malloc_create", 00:18:40.026 "params": { 00:18:40.026 "name": "malloc0", 00:18:40.026 "num_blocks": 8192, 00:18:40.026 "block_size": 4096, 00:18:40.026 "physical_block_size": 4096, 00:18:40.026 "uuid": 
"4f4fec5d-b0fe-4121-be09-1aa3f06e1e94", 00:18:40.026 "optimal_io_boundary": 0 00:18:40.026 } 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "method": "bdev_wait_for_examine" 00:18:40.026 } 00:18:40.026 ] 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "subsystem": "nbd", 00:18:40.026 "config": [] 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "subsystem": "scheduler", 00:18:40.026 "config": [ 00:18:40.026 { 00:18:40.026 "method": "framework_set_scheduler", 00:18:40.026 "params": { 00:18:40.026 "name": "static" 00:18:40.026 } 00:18:40.026 } 00:18:40.026 ] 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "subsystem": "nvmf", 00:18:40.026 "config": [ 00:18:40.026 { 00:18:40.026 "method": "nvmf_set_config", 00:18:40.026 "params": { 00:18:40.026 "discovery_filter": "match_any", 00:18:40.026 "admin_cmd_passthru": { 00:18:40.026 "identify_ctrlr": false 00:18:40.026 } 00:18:40.026 } 00:18:40.026 }, 00:18:40.026 { 00:18:40.026 "method": "nvmf_set_max_subsystems", 00:18:40.026 "params": { 00:18:40.027 "max_subsystems": 1024 00:18:40.027 } 00:18:40.027 }, 00:18:40.027 { 00:18:40.027 "method": "nvmf_set_crdt", 00:18:40.027 "params": { 00:18:40.027 "crdt1": 0, 00:18:40.027 "crdt2": 0, 00:18:40.027 "crdt3": 0 00:18:40.027 } 00:18:40.027 }, 00:18:40.027 { 00:18:40.027 "method": "nvmf_create_transport", 00:18:40.027 "params": { 00:18:40.027 "trtype": "TCP", 00:18:40.027 "max_queue_depth": 128, 00:18:40.027 "max_io_qpairs_per_ctrlr": 127, 00:18:40.027 "in_capsule_data_size": 4096, 00:18:40.027 "max_io_size": 131072, 00:18:40.027 "io_unit_size": 131072, 00:18:40.027 "max_aq_depth": 128, 00:18:40.027 "num_shared_buffers": 511, 00:18:40.027 "buf_cache_size": 4294967295, 00:18:40.027 "dif_insert_or_strip": false, 00:18:40.027 "zcopy": false, 00:18:40.027 "c2h_success": false, 00:18:40.027 "sock_priority": 0, 00:18:40.027 "abort_timeout_sec": 1, 00:18:40.027 "ack_timeout": 0, 00:18:40.027 "data_wr_pool_size": 0 00:18:40.027 } 00:18:40.027 }, 00:18:40.027 { 00:18:40.027 "method": "nvmf_create_subsystem", 00:18:40.027 "params": { 00:18:40.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.027 "allow_any_host": false, 00:18:40.027 "serial_number": "00000000000000000000", 00:18:40.027 "model_number": "SPDK bdev Controller", 00:18:40.027 "max_namespaces": 32, 00:18:40.027 "min_cntlid": 1, 00:18:40.027 "max_cntlid": 65519, 00:18:40.027 "ana_reporting": false 00:18:40.027 } 00:18:40.027 }, 00:18:40.027 { 00:18:40.027 "method": "nvmf_subsystem_add_host", 00:18:40.027 "params": { 00:18:40.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.027 "host": "nqn.2016-06.io.spdk:host1", 00:18:40.027 "psk": "key0" 00:18:40.027 } 00:18:40.027 }, 00:18:40.027 { 00:18:40.027 "method": "nvmf_subsystem_add_ns", 00:18:40.027 "params": { 00:18:40.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.027 "namespace": { 00:18:40.027 "nsid": 1, 00:18:40.027 "bdev_name": "malloc0", 00:18:40.027 "nguid": "4F4FEC5DB0FE4121BE091AA3F06E1E94", 00:18:40.027 "uuid": "4f4fec5d-b0fe-4121-be09-1aa3f06e1e94", 00:18:40.027 "no_auto_visible": false 00:18:40.027 } 00:18:40.027 } 00:18:40.027 }, 00:18:40.027 { 00:18:40.027 "method": "nvmf_subsystem_add_listener", 00:18:40.027 "params": { 00:18:40.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.027 "listen_address": { 00:18:40.027 "trtype": "TCP", 00:18:40.027 "adrfam": "IPv4", 00:18:40.027 "traddr": "10.0.0.2", 00:18:40.027 "trsvcid": "4420" 00:18:40.027 }, 00:18:40.027 "secure_channel": true 00:18:40.027 } 00:18:40.027 } 00:18:40.027 ] 00:18:40.027 } 00:18:40.027 ] 00:18:40.027 }' 00:18:40.027 11:04:36 -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.027 11:04:36 -- nvmf/common.sh@470 -- # nvmfpid=361954 00:18:40.027 11:04:36 -- nvmf/common.sh@471 -- # waitforlisten 361954 00:18:40.027 11:04:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:40.027 11:04:36 -- common/autotest_common.sh@827 -- # '[' -z 361954 ']' 00:18:40.027 11:04:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.027 11:04:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:40.027 11:04:36 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.027 11:04:36 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:40.027 11:04:36 -- common/autotest_common.sh@10 -- # set +x 00:18:40.027 [2024-05-15 11:04:36.566996] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:40.027 [2024-05-15 11:04:36.567048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.027 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.027 [2024-05-15 11:04:36.631695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.288 [2024-05-15 11:04:36.698085] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.288 [2024-05-15 11:04:36.698116] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.288 [2024-05-15 11:04:36.698122] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.288 [2024-05-15 11:04:36.698126] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.288 [2024-05-15 11:04:36.698131] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
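Note: the JSON blob echoed by target/tls.sh@269 above and handed to nvmf_tgt on /dev/fd/62 is the target-side half of the TLS setup. Reduced to the PSK-relevant pieces (NQNs and key path copied from this run; the bdev/malloc0, namespace and other subsystem entries from the full dump are omitted), the pattern is roughly:

    # Minimal sketch, not the full config above: register the PSK in the keyring
    # and require it for host1 on a TLS-enabled (secure_channel) listener.
    tgtcfg='{ "subsystems": [
      { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.1w7s70vSTv" } } ] },
      { "subsystem": "nvmf", "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": true } } ] } ] }'
    # binary path shortened; the run above uses the full workspace path
    nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")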
00:18:40.288 [2024-05-15 11:04:36.698177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.288 [2024-05-15 11:04:36.887072] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.288 [2024-05-15 11:04:36.919055] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:40.288 [2024-05-15 11:04:36.919103] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.288 [2024-05-15 11:04:36.930861] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.858 11:04:37 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:40.858 11:04:37 -- common/autotest_common.sh@860 -- # return 0 00:18:40.858 11:04:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:40.858 11:04:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.858 11:04:37 -- common/autotest_common.sh@10 -- # set +x 00:18:40.858 11:04:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.858 11:04:37 -- target/tls.sh@272 -- # bdevperf_pid=362011 00:18:40.858 11:04:37 -- target/tls.sh@273 -- # waitforlisten 362011 /var/tmp/bdevperf.sock 00:18:40.858 11:04:37 -- common/autotest_common.sh@827 -- # '[' -z 362011 ']' 00:18:40.858 11:04:37 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.858 11:04:37 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:40.858 11:04:37 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
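Note: the initiator half that follows works the same way from the other side: bdevperf is started idle (-z) with its own JSON config on /dev/fd/63, that config loads the identical key file into the keyring and hands it to bdev_nvme_attach_controller as "psk": "key0", and I/O only starts once bdevperf.py issues perform_tests. A rough sketch of that flow (binary paths shortened, config trimmed to the TLS pieces):

    # Initiator-side sketch: idle bdevperf + inline config + explicit test start.
    bperfcfg='{ "subsystems": [
      { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.1w7s70vSTv" } } ] },
      { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "bdev_wait_for_examine" } ] } ] }'
    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
             -c <(echo "$bperfcfg") &
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # runs the 1 s verify job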
00:18:40.858 11:04:37 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:40.858 11:04:37 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:40.858 11:04:37 -- common/autotest_common.sh@10 -- # set +x 00:18:40.858 11:04:37 -- target/tls.sh@270 -- # echo '{ 00:18:40.858 "subsystems": [ 00:18:40.858 { 00:18:40.858 "subsystem": "keyring", 00:18:40.858 "config": [ 00:18:40.858 { 00:18:40.858 "method": "keyring_file_add_key", 00:18:40.858 "params": { 00:18:40.858 "name": "key0", 00:18:40.858 "path": "/tmp/tmp.1w7s70vSTv" 00:18:40.858 } 00:18:40.858 } 00:18:40.858 ] 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "subsystem": "iobuf", 00:18:40.858 "config": [ 00:18:40.858 { 00:18:40.858 "method": "iobuf_set_options", 00:18:40.858 "params": { 00:18:40.858 "small_pool_count": 8192, 00:18:40.858 "large_pool_count": 1024, 00:18:40.858 "small_bufsize": 8192, 00:18:40.858 "large_bufsize": 135168 00:18:40.858 } 00:18:40.858 } 00:18:40.858 ] 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "subsystem": "sock", 00:18:40.858 "config": [ 00:18:40.858 { 00:18:40.858 "method": "sock_impl_set_options", 00:18:40.858 "params": { 00:18:40.858 "impl_name": "posix", 00:18:40.858 "recv_buf_size": 2097152, 00:18:40.858 "send_buf_size": 2097152, 00:18:40.858 "enable_recv_pipe": true, 00:18:40.858 "enable_quickack": false, 00:18:40.858 "enable_placement_id": 0, 00:18:40.858 "enable_zerocopy_send_server": true, 00:18:40.858 "enable_zerocopy_send_client": false, 00:18:40.858 "zerocopy_threshold": 0, 00:18:40.858 "tls_version": 0, 00:18:40.858 "enable_ktls": false 00:18:40.858 } 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "method": "sock_impl_set_options", 00:18:40.858 "params": { 00:18:40.858 "impl_name": "ssl", 00:18:40.858 "recv_buf_size": 4096, 00:18:40.858 "send_buf_size": 4096, 00:18:40.858 "enable_recv_pipe": true, 00:18:40.858 "enable_quickack": false, 00:18:40.858 "enable_placement_id": 0, 00:18:40.858 "enable_zerocopy_send_server": true, 00:18:40.858 "enable_zerocopy_send_client": false, 00:18:40.858 "zerocopy_threshold": 0, 00:18:40.858 "tls_version": 0, 00:18:40.858 "enable_ktls": false 00:18:40.858 } 00:18:40.858 } 00:18:40.858 ] 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "subsystem": "vmd", 00:18:40.858 "config": [] 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "subsystem": "accel", 00:18:40.858 "config": [ 00:18:40.858 { 00:18:40.858 "method": "accel_set_options", 00:18:40.858 "params": { 00:18:40.858 "small_cache_size": 128, 00:18:40.858 "large_cache_size": 16, 00:18:40.858 "task_count": 2048, 00:18:40.858 "sequence_count": 2048, 00:18:40.858 "buf_count": 2048 00:18:40.858 } 00:18:40.858 } 00:18:40.858 ] 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "subsystem": "bdev", 00:18:40.858 "config": [ 00:18:40.858 { 00:18:40.858 "method": "bdev_set_options", 00:18:40.858 "params": { 00:18:40.858 "bdev_io_pool_size": 65535, 00:18:40.858 "bdev_io_cache_size": 256, 00:18:40.858 "bdev_auto_examine": true, 00:18:40.858 "iobuf_small_cache_size": 128, 00:18:40.858 "iobuf_large_cache_size": 16 00:18:40.858 } 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "method": "bdev_raid_set_options", 00:18:40.858 "params": { 00:18:40.858 "process_window_size_kb": 1024 00:18:40.858 } 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "method": "bdev_iscsi_set_options", 00:18:40.858 "params": { 00:18:40.858 "timeout_sec": 30 00:18:40.858 } 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "method": "bdev_nvme_set_options", 
00:18:40.858 "params": { 00:18:40.858 "action_on_timeout": "none", 00:18:40.858 "timeout_us": 0, 00:18:40.858 "timeout_admin_us": 0, 00:18:40.858 "keep_alive_timeout_ms": 10000, 00:18:40.858 "arbitration_burst": 0, 00:18:40.858 "low_priority_weight": 0, 00:18:40.858 "medium_priority_weight": 0, 00:18:40.858 "high_priority_weight": 0, 00:18:40.858 "nvme_adminq_poll_period_us": 10000, 00:18:40.858 "nvme_ioq_poll_period_us": 0, 00:18:40.858 "io_queue_requests": 512, 00:18:40.858 "delay_cmd_submit": true, 00:18:40.858 "transport_retry_count": 4, 00:18:40.858 "bdev_retry_count": 3, 00:18:40.858 "transport_ack_timeout": 0, 00:18:40.858 "ctrlr_loss_timeout_sec": 0, 00:18:40.858 "reconnect_delay_sec": 0, 00:18:40.858 "fast_io_fail_timeout_sec": 0, 00:18:40.858 "disable_auto_failback": false, 00:18:40.858 "generate_uuids": false, 00:18:40.858 "transport_tos": 0, 00:18:40.858 "nvme_error_stat": false, 00:18:40.858 "rdma_srq_size": 0, 00:18:40.858 "io_path_stat": false, 00:18:40.858 "allow_accel_sequence": false, 00:18:40.858 "rdma_max_cq_size": 0, 00:18:40.858 "rdma_cm_event_timeout_ms": 0, 00:18:40.858 "dhchap_digests": [ 00:18:40.858 "sha256", 00:18:40.858 "sha384", 00:18:40.858 "sha512" 00:18:40.858 ], 00:18:40.858 "dhchap_dhgroups": [ 00:18:40.858 "null", 00:18:40.858 "ffdhe2048", 00:18:40.858 "ffdhe3072", 00:18:40.858 "ffdhe4096", 00:18:40.858 "ffdhe6144", 00:18:40.858 "ffdhe8192" 00:18:40.858 ] 00:18:40.858 } 00:18:40.858 }, 00:18:40.858 { 00:18:40.858 "method": "bdev_nvme_attach_controller", 00:18:40.858 "params": { 00:18:40.858 "name": "nvme0", 00:18:40.858 "trtype": "TCP", 00:18:40.858 "adrfam": "IPv4", 00:18:40.858 "traddr": "10.0.0.2", 00:18:40.858 "trsvcid": "4420", 00:18:40.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.858 "prchk_reftag": false, 00:18:40.858 "prchk_guard": false, 00:18:40.858 "ctrlr_loss_timeout_sec": 0, 00:18:40.858 "reconnect_delay_sec": 0, 00:18:40.858 "fast_io_fail_timeout_sec": 0, 00:18:40.859 "psk": "key0", 00:18:40.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.859 "hdgst": false, 00:18:40.859 "ddgst": false 00:18:40.859 } 00:18:40.859 }, 00:18:40.859 { 00:18:40.859 "method": "bdev_nvme_set_hotplug", 00:18:40.859 "params": { 00:18:40.859 "period_us": 100000, 00:18:40.859 "enable": false 00:18:40.859 } 00:18:40.859 }, 00:18:40.859 { 00:18:40.859 "method": "bdev_enable_histogram", 00:18:40.859 "params": { 00:18:40.859 "name": "nvme0n1", 00:18:40.859 "enable": true 00:18:40.859 } 00:18:40.859 }, 00:18:40.859 { 00:18:40.859 "method": "bdev_wait_for_examine" 00:18:40.859 } 00:18:40.859 ] 00:18:40.859 }, 00:18:40.859 { 00:18:40.859 "subsystem": "nbd", 00:18:40.859 "config": [] 00:18:40.859 } 00:18:40.859 ] 00:18:40.859 }' 00:18:40.859 [2024-05-15 11:04:37.446527] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:18:40.859 [2024-05-15 11:04:37.446585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362011 ] 00:18:40.859 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.120 [2024-05-15 11:04:37.521876] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.120 [2024-05-15 11:04:37.575398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.120 [2024-05-15 11:04:37.701383] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.691 11:04:38 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:41.691 11:04:38 -- common/autotest_common.sh@860 -- # return 0 00:18:41.691 11:04:38 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:41.691 11:04:38 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:41.951 11:04:38 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.951 11:04:38 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:41.951 Running I/O for 1 seconds... 00:18:43.333 00:18:43.333 Latency(us) 00:18:43.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.333 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:43.333 Verification LBA range: start 0x0 length 0x2000 00:18:43.333 nvme0n1 : 1.12 3163.14 12.36 0.00 0.00 38903.60 5652.48 128450.56 00:18:43.333 =================================================================================================================== 00:18:43.333 Total : 3163.14 12.36 0.00 0.00 38903.60 5652.48 128450.56 00:18:43.333 0 00:18:43.333 11:04:39 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:43.333 11:04:39 -- target/tls.sh@279 -- # cleanup 00:18:43.333 11:04:39 -- target/tls.sh@15 -- # process_shm --id 0 00:18:43.333 11:04:39 -- common/autotest_common.sh@804 -- # type=--id 00:18:43.333 11:04:39 -- common/autotest_common.sh@805 -- # id=0 00:18:43.333 11:04:39 -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:43.333 11:04:39 -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:43.333 11:04:39 -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:43.333 11:04:39 -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:43.333 11:04:39 -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:43.333 11:04:39 -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:43.333 nvmf_trace.0 00:18:43.333 11:04:39 -- common/autotest_common.sh@819 -- # return 0 00:18:43.333 11:04:39 -- target/tls.sh@16 -- # killprocess 362011 00:18:43.333 11:04:39 -- common/autotest_common.sh@946 -- # '[' -z 362011 ']' 00:18:43.333 11:04:39 -- common/autotest_common.sh@950 -- # kill -0 362011 00:18:43.333 11:04:39 -- common/autotest_common.sh@951 -- # uname 00:18:43.333 11:04:39 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:43.333 11:04:39 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 362011 00:18:43.333 11:04:39 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:43.333 11:04:39 -- common/autotest_common.sh@956 -- # 
'[' reactor_1 = sudo ']' 00:18:43.333 11:04:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 362011' 00:18:43.333 killing process with pid 362011 00:18:43.333 11:04:39 -- common/autotest_common.sh@965 -- # kill 362011 00:18:43.333 Received shutdown signal, test time was about 1.000000 seconds 00:18:43.333 00:18:43.333 Latency(us) 00:18:43.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.333 =================================================================================================================== 00:18:43.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.333 11:04:39 -- common/autotest_common.sh@970 -- # wait 362011 00:18:43.333 11:04:39 -- target/tls.sh@17 -- # nvmftestfini 00:18:43.333 11:04:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:43.333 11:04:39 -- nvmf/common.sh@117 -- # sync 00:18:43.333 11:04:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.333 11:04:39 -- nvmf/common.sh@120 -- # set +e 00:18:43.333 11:04:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.333 11:04:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.333 rmmod nvme_tcp 00:18:43.333 rmmod nvme_fabrics 00:18:43.333 rmmod nvme_keyring 00:18:43.333 11:04:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.333 11:04:39 -- nvmf/common.sh@124 -- # set -e 00:18:43.333 11:04:39 -- nvmf/common.sh@125 -- # return 0 00:18:43.333 11:04:39 -- nvmf/common.sh@478 -- # '[' -n 361954 ']' 00:18:43.333 11:04:39 -- nvmf/common.sh@479 -- # killprocess 361954 00:18:43.333 11:04:39 -- common/autotest_common.sh@946 -- # '[' -z 361954 ']' 00:18:43.333 11:04:39 -- common/autotest_common.sh@950 -- # kill -0 361954 00:18:43.333 11:04:39 -- common/autotest_common.sh@951 -- # uname 00:18:43.333 11:04:39 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:43.333 11:04:39 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 361954 00:18:43.333 11:04:39 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:43.333 11:04:39 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:43.333 11:04:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 361954' 00:18:43.333 killing process with pid 361954 00:18:43.333 11:04:39 -- common/autotest_common.sh@965 -- # kill 361954 00:18:43.333 [2024-05-15 11:04:39.951414] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:43.333 11:04:39 -- common/autotest_common.sh@970 -- # wait 361954 00:18:43.594 11:04:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:43.594 11:04:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:43.594 11:04:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:43.594 11:04:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.594 11:04:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.594 11:04:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.594 11:04:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.594 11:04:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.503 11:04:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.764 11:04:42 -- target/tls.sh@18 -- # rm -f /tmp/tmp.WP5Ti1vEmP /tmp/tmp.7CHCe5c1Vn /tmp/tmp.1w7s70vSTv 00:18:45.764 00:18:45.764 real 1m22.894s 00:18:45.764 user 2m11.096s 00:18:45.764 sys 0m23.850s 00:18:45.764 
11:04:42 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.764 11:04:42 -- common/autotest_common.sh@10 -- # set +x 00:18:45.764 ************************************ 00:18:45.764 END TEST nvmf_tls 00:18:45.764 ************************************ 00:18:45.764 11:04:42 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:45.764 11:04:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:45.764 11:04:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.764 11:04:42 -- common/autotest_common.sh@10 -- # set +x 00:18:45.764 ************************************ 00:18:45.764 START TEST nvmf_fips 00:18:45.764 ************************************ 00:18:45.764 11:04:42 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:45.764 * Looking for test storage... 00:18:45.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:45.764 11:04:42 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.764 11:04:42 -- nvmf/common.sh@7 -- # uname -s 00:18:45.764 11:04:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.764 11:04:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.764 11:04:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.764 11:04:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.764 11:04:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.764 11:04:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.764 11:04:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.764 11:04:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.764 11:04:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.764 11:04:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.764 11:04:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.764 11:04:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.764 11:04:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.764 11:04:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.764 11:04:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.764 11:04:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.764 11:04:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.764 11:04:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.764 11:04:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.764 11:04:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.764 11:04:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.764 11:04:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.764 11:04:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.764 11:04:42 -- paths/export.sh@5 -- # export PATH 00:18:45.764 11:04:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.764 11:04:42 -- nvmf/common.sh@47 -- # : 0 00:18:45.764 11:04:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.764 11:04:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.764 11:04:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.764 11:04:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.764 11:04:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.764 11:04:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.764 11:04:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.764 11:04:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.764 11:04:42 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.764 11:04:42 -- fips/fips.sh@89 -- # check_openssl_version 00:18:45.764 11:04:42 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:45.764 11:04:42 -- fips/fips.sh@85 -- # openssl version 00:18:45.764 11:04:42 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:46.024 11:04:42 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:46.024 11:04:42 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:46.024 11:04:42 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:46.024 11:04:42 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:46.024 11:04:42 -- scripts/common.sh@333 -- # IFS=.-: 00:18:46.024 11:04:42 -- scripts/common.sh@333 -- # read -ra ver1 00:18:46.024 11:04:42 -- scripts/common.sh@334 -- # IFS=.-: 00:18:46.024 11:04:42 -- scripts/common.sh@334 -- # read -ra ver2 00:18:46.024 11:04:42 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:46.024 11:04:42 -- scripts/common.sh@337 -- # ver1_l=3 00:18:46.024 11:04:42 -- scripts/common.sh@338 -- # ver2_l=3 00:18:46.024 11:04:42 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:18:46.024 11:04:42 -- scripts/common.sh@341 -- # case "$op" in 00:18:46.025 11:04:42 -- scripts/common.sh@345 -- # : 1 00:18:46.025 11:04:42 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:46.025 11:04:42 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.025 11:04:42 -- scripts/common.sh@362 -- # decimal 3 00:18:46.025 11:04:42 -- scripts/common.sh@350 -- # local d=3 00:18:46.025 11:04:42 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:46.025 11:04:42 -- scripts/common.sh@352 -- # echo 3 00:18:46.025 11:04:42 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:46.025 11:04:42 -- scripts/common.sh@363 -- # decimal 3 00:18:46.025 11:04:42 -- scripts/common.sh@350 -- # local d=3 00:18:46.025 11:04:42 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:46.025 11:04:42 -- scripts/common.sh@352 -- # echo 3 00:18:46.025 11:04:42 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:46.025 11:04:42 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:46.025 11:04:42 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:46.025 11:04:42 -- scripts/common.sh@361 -- # (( v++ )) 00:18:46.025 11:04:42 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.025 11:04:42 -- scripts/common.sh@362 -- # decimal 0 00:18:46.025 11:04:42 -- scripts/common.sh@350 -- # local d=0 00:18:46.025 11:04:42 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:46.025 11:04:42 -- scripts/common.sh@352 -- # echo 0 00:18:46.025 11:04:42 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:46.025 11:04:42 -- scripts/common.sh@363 -- # decimal 0 00:18:46.025 11:04:42 -- scripts/common.sh@350 -- # local d=0 00:18:46.025 11:04:42 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:46.025 11:04:42 -- scripts/common.sh@352 -- # echo 0 00:18:46.025 11:04:42 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:46.025 11:04:42 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:46.025 11:04:42 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:46.025 11:04:42 -- scripts/common.sh@361 -- # (( v++ )) 00:18:46.025 11:04:42 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.025 11:04:42 -- scripts/common.sh@362 -- # decimal 9 00:18:46.025 11:04:42 -- scripts/common.sh@350 -- # local d=9 00:18:46.025 11:04:42 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:46.025 11:04:42 -- scripts/common.sh@352 -- # echo 9 00:18:46.025 11:04:42 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:46.025 11:04:42 -- scripts/common.sh@363 -- # decimal 0 00:18:46.025 11:04:42 -- scripts/common.sh@350 -- # local d=0 00:18:46.025 11:04:42 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:46.025 11:04:42 -- scripts/common.sh@352 -- # echo 0 00:18:46.025 11:04:42 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:46.025 11:04:42 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:46.025 11:04:42 -- scripts/common.sh@364 -- # return 0 00:18:46.025 11:04:42 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:46.025 11:04:42 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:46.025 11:04:42 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:46.025 11:04:42 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:46.025 11:04:42 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:46.025 11:04:42 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:46.025 11:04:42 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:46.025 11:04:42 -- fips/fips.sh@113 -- # build_openssl_config 00:18:46.025 11:04:42 -- fips/fips.sh@37 -- # cat 00:18:46.025 11:04:42 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:46.025 11:04:42 -- fips/fips.sh@58 -- # cat - 00:18:46.025 11:04:42 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:46.025 11:04:42 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:46.025 11:04:42 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:46.025 11:04:42 -- fips/fips.sh@116 -- # openssl list -providers 00:18:46.025 11:04:42 -- fips/fips.sh@116 -- # grep name 00:18:46.025 11:04:42 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:46.025 11:04:42 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:46.025 11:04:42 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:46.025 11:04:42 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:46.025 11:04:42 -- common/autotest_common.sh@648 -- # local es=0 00:18:46.025 11:04:42 -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:46.025 11:04:42 -- fips/fips.sh@127 -- # : 00:18:46.025 11:04:42 -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:46.025 11:04:42 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.025 11:04:42 -- common/autotest_common.sh@640 -- # type -t openssl 00:18:46.025 11:04:42 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.025 11:04:42 -- common/autotest_common.sh@642 -- # type -P openssl 00:18:46.025 11:04:42 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.025 11:04:42 -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:46.025 11:04:42 -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:46.025 11:04:42 -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:46.025 Error setting digest 00:18:46.025 004283E34B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:46.025 004283E34B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:46.025 11:04:42 -- common/autotest_common.sh@651 -- # es=1 00:18:46.025 11:04:42 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:46.025 11:04:42 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:46.025 11:04:42 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:46.025 11:04:42 -- fips/fips.sh@130 -- # nvmftestinit 00:18:46.025 11:04:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:46.025 11:04:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.025 11:04:42 -- nvmf/common.sh@437 -- # prepare_net_devs 
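Note: two gates run in the trace above before any nvmf traffic. fips.sh first checks that the system OpenSSL is at least 3.0.0 (the digit-by-digit cmp_versions walk over 3.0.9 vs 3.0.0), then proves the FIPS provider is actually enforced by asking for MD5 and requiring that to fail, so the "Error setting digest" lines are the expected result, not a test failure. A condensed equivalent of that gate (using sort -V instead of the script's own cmp_versions, messages reworded):

    # Require OpenSSL >= 3.0.0, then confirm FIPS enforcement: MD5 must be rejected.
    ver=$(openssl version | awk '{print $2}')          # 3.0.9 on this builder
    if [[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) != 3.0.0 ]]; then
        echo "OpenSSL $ver is older than 3.0.0" >&2; exit 1
    fi
    export OPENSSL_CONF=spdk_fips.conf                 # generated FIPS-only provider config
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 still usable, FIPS provider not enforced" >&2; exit 1
    fi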
00:18:46.025 11:04:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:46.025 11:04:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:46.025 11:04:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.025 11:04:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.025 11:04:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.025 11:04:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:46.025 11:04:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:46.025 11:04:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.025 11:04:42 -- common/autotest_common.sh@10 -- # set +x 00:18:54.154 11:04:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:54.154 11:04:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:54.154 11:04:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:54.154 11:04:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:54.154 11:04:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:54.154 11:04:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:54.154 11:04:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:54.154 11:04:49 -- nvmf/common.sh@295 -- # net_devs=() 00:18:54.154 11:04:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:54.154 11:04:49 -- nvmf/common.sh@296 -- # e810=() 00:18:54.154 11:04:49 -- nvmf/common.sh@296 -- # local -ga e810 00:18:54.154 11:04:49 -- nvmf/common.sh@297 -- # x722=() 00:18:54.154 11:04:49 -- nvmf/common.sh@297 -- # local -ga x722 00:18:54.154 11:04:49 -- nvmf/common.sh@298 -- # mlx=() 00:18:54.154 11:04:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:54.154 11:04:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.154 11:04:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.154 11:04:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.154 11:04:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.154 11:04:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.154 11:04:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.154 11:04:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.154 11:04:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.155 11:04:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.155 11:04:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.155 11:04:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.155 11:04:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:54.155 11:04:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:54.155 11:04:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:54.155 11:04:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.155 11:04:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:54.155 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:54.155 11:04:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.155 11:04:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:54.155 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:54.155 11:04:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:54.155 11:04:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.155 11:04:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.155 11:04:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:54.155 11:04:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.155 11:04:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:54.155 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:54.155 11:04:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.155 11:04:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.155 11:04:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.155 11:04:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:54.155 11:04:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.155 11:04:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:54.155 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:54.155 11:04:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.155 11:04:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:54.155 11:04:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:54.155 11:04:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:54.155 11:04:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.155 11:04:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.155 11:04:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.155 11:04:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:54.155 11:04:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.155 11:04:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.155 11:04:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:54.155 11:04:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.155 11:04:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.155 11:04:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:54.155 11:04:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:54.155 11:04:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.155 11:04:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.155 11:04:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.155 11:04:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:18:54.155 11:04:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:54.155 11:04:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.155 11:04:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.155 11:04:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.155 11:04:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:54.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:18:54.155 00:18:54.155 --- 10.0.0.2 ping statistics --- 00:18:54.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.155 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:18:54.155 11:04:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:18:54.155 00:18:54.155 --- 10.0.0.1 ping statistics --- 00:18:54.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.155 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:18:54.155 11:04:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.155 11:04:49 -- nvmf/common.sh@411 -- # return 0 00:18:54.155 11:04:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:54.155 11:04:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.155 11:04:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:54.155 11:04:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.155 11:04:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:54.155 11:04:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:54.155 11:04:49 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:54.155 11:04:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:54.155 11:04:49 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:54.155 11:04:49 -- common/autotest_common.sh@10 -- # set +x 00:18:54.155 11:04:49 -- nvmf/common.sh@470 -- # nvmfpid=366692 00:18:54.155 11:04:49 -- nvmf/common.sh@471 -- # waitforlisten 366692 00:18:54.155 11:04:49 -- common/autotest_common.sh@827 -- # '[' -z 366692 ']' 00:18:54.155 11:04:49 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.155 11:04:49 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:54.155 11:04:49 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.155 11:04:49 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:54.155 11:04:49 -- common/autotest_common.sh@10 -- # set +x 00:18:54.155 11:04:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.155 [2024-05-15 11:04:49.648683] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
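Note: the nvmf_tcp_init block above is what gives these phy tests a two-endpoint TCP path on a single host: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target (10.0.0.2), the other port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), and port 4420 is opened before both directions are ping-checked. Condensed from the trace:

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator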
00:18:54.155 [2024-05-15 11:04:49.648756] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.155 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.155 [2024-05-15 11:04:49.734364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.155 [2024-05-15 11:04:49.827461] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.155 [2024-05-15 11:04:49.827512] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.155 [2024-05-15 11:04:49.827520] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.155 [2024-05-15 11:04:49.827526] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.155 [2024-05-15 11:04:49.827533] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.155 [2024-05-15 11:04:49.827563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.155 11:04:50 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:54.155 11:04:50 -- common/autotest_common.sh@860 -- # return 0 00:18:54.155 11:04:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:54.155 11:04:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.155 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:18:54.155 11:04:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.155 11:04:50 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:54.155 11:04:50 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:54.155 11:04:50 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.155 11:04:50 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:54.155 11:04:50 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.155 11:04:50 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.155 11:04:50 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.155 11:04:50 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.155 [2024-05-15 11:04:50.606311] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.155 [2024-05-15 11:04:50.622275] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:54.155 [2024-05-15 11:04:50.622333] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.155 [2024-05-15 11:04:50.622560] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.155 [2024-05-15 11:04:50.652363] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:54.155 malloc0 00:18:54.155 11:04:50 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.155 11:04:50 -- fips/fips.sh@147 -- # 
bdevperf_pid=367029 00:18:54.155 11:04:50 -- fips/fips.sh@148 -- # waitforlisten 367029 /var/tmp/bdevperf.sock 00:18:54.155 11:04:50 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.155 11:04:50 -- common/autotest_common.sh@827 -- # '[' -z 367029 ']' 00:18:54.155 11:04:50 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.155 11:04:50 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:54.155 11:04:50 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.155 11:04:50 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:54.155 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:18:54.155 [2024-05-15 11:04:50.743487] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:18:54.156 [2024-05-15 11:04:50.743566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367029 ] 00:18:54.156 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.156 [2024-05-15 11:04:50.799787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.416 [2024-05-15 11:04:50.864309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.986 11:04:51 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:54.986 11:04:51 -- common/autotest_common.sh@860 -- # return 0 00:18:54.986 11:04:51 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:55.247 [2024-05-15 11:04:51.648918] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.247 [2024-05-15 11:04:51.648989] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:55.247 TLSTESTn1 00:18:55.247 11:04:51 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:55.247 Running I/O for 10 seconds... 
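Everything between fips.sh@136 and the perform_tests call above is the FIPS/TLS flow under test: the PSK in NVMe TLS interchange format is written to key.txt with 0600 permissions, the target registers a TLS-capable listener for nqn.2016-06.io.spdk:cnode1, and a separate bdevperf process attaches to that subsystem over NVMe/TCP with the same key before driving 10 seconds of verify I/O. A condensed sketch of the initiator half, with paths shortened relative to the spdk tree and the key value elided (both appear in full in this log):

    echo -n 'NVMeTLSkey-1:01:...' > key.txt
    chmod 0600 key.txt
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests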
00:19:05.244 00:19:05.244 Latency(us) 00:19:05.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.244 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:05.244 Verification LBA range: start 0x0 length 0x2000 00:19:05.244 TLSTESTn1 : 10.01 5319.47 20.78 0.00 0.00 24029.03 4915.20 93498.03 00:19:05.244 =================================================================================================================== 00:19:05.244 Total : 5319.47 20.78 0.00 0.00 24029.03 4915.20 93498.03 00:19:05.244 0 00:19:05.244 11:05:01 -- fips/fips.sh@1 -- # cleanup 00:19:05.244 11:05:01 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:05.244 11:05:01 -- common/autotest_common.sh@804 -- # type=--id 00:19:05.244 11:05:01 -- common/autotest_common.sh@805 -- # id=0 00:19:05.244 11:05:01 -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:19:05.244 11:05:01 -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:05.505 11:05:01 -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:19:05.505 11:05:01 -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:19:05.505 11:05:01 -- common/autotest_common.sh@816 -- # for n in $shm_files 00:19:05.505 11:05:01 -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:05.505 nvmf_trace.0 00:19:05.505 11:05:01 -- common/autotest_common.sh@819 -- # return 0 00:19:05.505 11:05:01 -- fips/fips.sh@16 -- # killprocess 367029 00:19:05.505 11:05:01 -- common/autotest_common.sh@946 -- # '[' -z 367029 ']' 00:19:05.505 11:05:01 -- common/autotest_common.sh@950 -- # kill -0 367029 00:19:05.505 11:05:01 -- common/autotest_common.sh@951 -- # uname 00:19:05.505 11:05:01 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:05.505 11:05:01 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 367029 00:19:05.505 11:05:02 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:05.505 11:05:02 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:05.505 11:05:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 367029' 00:19:05.505 killing process with pid 367029 00:19:05.505 11:05:02 -- common/autotest_common.sh@965 -- # kill 367029 00:19:05.505 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.505 00:19:05.505 Latency(us) 00:19:05.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.505 =================================================================================================================== 00:19:05.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.505 [2024-05-15 11:05:02.041502] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:05.505 11:05:02 -- common/autotest_common.sh@970 -- # wait 367029 00:19:05.505 11:05:02 -- fips/fips.sh@17 -- # nvmftestfini 00:19:05.505 11:05:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:05.505 11:05:02 -- nvmf/common.sh@117 -- # sync 00:19:05.505 11:05:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.505 11:05:02 -- nvmf/common.sh@120 -- # set +e 00:19:05.505 11:05:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.505 11:05:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.766 rmmod nvme_tcp 00:19:05.766 rmmod nvme_fabrics 00:19:05.766 rmmod nvme_keyring 00:19:05.766 
11:05:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.766 11:05:02 -- nvmf/common.sh@124 -- # set -e 00:19:05.766 11:05:02 -- nvmf/common.sh@125 -- # return 0 00:19:05.766 11:05:02 -- nvmf/common.sh@478 -- # '[' -n 366692 ']' 00:19:05.766 11:05:02 -- nvmf/common.sh@479 -- # killprocess 366692 00:19:05.766 11:05:02 -- common/autotest_common.sh@946 -- # '[' -z 366692 ']' 00:19:05.766 11:05:02 -- common/autotest_common.sh@950 -- # kill -0 366692 00:19:05.766 11:05:02 -- common/autotest_common.sh@951 -- # uname 00:19:05.766 11:05:02 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:05.766 11:05:02 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 366692 00:19:05.766 11:05:02 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:05.766 11:05:02 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:05.766 11:05:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 366692' 00:19:05.766 killing process with pid 366692 00:19:05.766 11:05:02 -- common/autotest_common.sh@965 -- # kill 366692 00:19:05.766 [2024-05-15 11:05:02.286894] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:05.766 [2024-05-15 11:05:02.286920] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:05.766 11:05:02 -- common/autotest_common.sh@970 -- # wait 366692 00:19:05.766 11:05:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:05.766 11:05:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:05.766 11:05:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:05.766 11:05:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.766 11:05:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:05.766 11:05:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.766 11:05:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.766 11:05:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.309 11:05:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:08.309 11:05:04 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:08.309 00:19:08.309 real 0m22.229s 00:19:08.309 user 0m24.873s 00:19:08.309 sys 0m8.049s 00:19:08.310 11:05:04 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:08.310 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:19:08.310 ************************************ 00:19:08.310 END TEST nvmf_fips 00:19:08.310 ************************************ 00:19:08.310 11:05:04 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:19:08.310 11:05:04 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:19:08.310 11:05:04 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:19:08.310 11:05:04 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:19:08.310 11:05:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.310 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:19:14.895 11:05:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:14.895 11:05:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.895 11:05:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.895 11:05:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.895 11:05:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.895 11:05:11 -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:19:14.895 11:05:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.895 11:05:11 -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.895 11:05:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.895 11:05:11 -- nvmf/common.sh@296 -- # e810=() 00:19:14.895 11:05:11 -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.895 11:05:11 -- nvmf/common.sh@297 -- # x722=() 00:19:14.895 11:05:11 -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.895 11:05:11 -- nvmf/common.sh@298 -- # mlx=() 00:19:14.895 11:05:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.895 11:05:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.895 11:05:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.895 11:05:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.895 11:05:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.895 11:05:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.895 11:05:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:14.895 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:14.895 11:05:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.895 11:05:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:14.895 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:14.895 11:05:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.895 11:05:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.895 11:05:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.895 11:05:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.895 
11:05:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:14.895 11:05:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.895 11:05:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:14.895 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:14.895 11:05:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.895 11:05:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.895 11:05:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.895 11:05:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:14.895 11:05:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.895 11:05:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:14.895 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:14.896 11:05:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.896 11:05:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:14.896 11:05:11 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.896 11:05:11 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:19:14.896 11:05:11 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:14.896 11:05:11 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:14.896 11:05:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:14.896 11:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.896 ************************************ 00:19:14.896 START TEST nvmf_perf_adq 00:19:14.896 ************************************ 00:19:14.896 11:05:11 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:14.896 * Looking for test storage... 
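gather_supported_nvmf_pci_devs, traced above, builds the interface list for the TCP tests by whitelisting the PCI device IDs of supported NICs (Intel E810 0x1592/0x159b, X722 0x37d2, plus a set of Mellanox IDs) and then resolving each matching PCI address to its kernel netdev through sysfs; here both E810 ports resolve to cvl_0_0 and cvl_0_1. A minimal sketch of that resolution step, using one PCI address reported in this run:

    pci=0000:4b:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: $(basename "$dev")"
    done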
00:19:14.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.896 11:05:11 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.896 11:05:11 -- nvmf/common.sh@7 -- # uname -s 00:19:14.896 11:05:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.896 11:05:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.896 11:05:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.896 11:05:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.896 11:05:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.896 11:05:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.896 11:05:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.896 11:05:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.896 11:05:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.896 11:05:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.896 11:05:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.896 11:05:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.896 11:05:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.896 11:05:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.896 11:05:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.896 11:05:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.896 11:05:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.896 11:05:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.896 11:05:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.896 11:05:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.896 11:05:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.896 11:05:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.896 11:05:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.896 11:05:11 -- paths/export.sh@5 -- # export PATH 00:19:14.896 11:05:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.896 11:05:11 -- nvmf/common.sh@47 -- # : 0 00:19:14.896 11:05:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.896 11:05:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.896 11:05:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.896 11:05:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.896 11:05:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.896 11:05:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.896 11:05:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.896 11:05:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.896 11:05:11 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:14.896 11:05:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.896 11:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:23.026 11:05:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.026 11:05:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.026 11:05:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.026 11:05:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.026 11:05:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.026 11:05:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.026 11:05:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.026 11:05:18 -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.026 11:05:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.026 11:05:18 -- nvmf/common.sh@296 -- # e810=() 00:19:23.026 11:05:18 -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.026 11:05:18 -- nvmf/common.sh@297 -- # x722=() 00:19:23.026 11:05:18 -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.026 11:05:18 -- nvmf/common.sh@298 -- # mlx=() 00:19:23.026 11:05:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.026 11:05:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
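perf_adq.sh begins by sourcing test/nvmf/common.sh, which is where the identifiers echoed above come from: the listener ports 4420-4422, the 192.168.100 prefix reserved for RDMA runs, and a host NQN freshly generated with nvme-cli whose UUID doubles as the host ID passed to later nvme connect calls. A small illustrative sketch of that relationship (the suffix-stripping shown here is an illustration, not a quote of common.sh; the values in this run are the ones printed above):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare <uuid>, used as --hostid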
00:19:23.026 11:05:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.026 11:05:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.026 11:05:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.026 11:05:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.026 11:05:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.026 11:05:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:23.026 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:23.026 11:05:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.026 11:05:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:23.026 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:23.026 11:05:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.026 11:05:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:23.026 11:05:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.026 11:05:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.026 11:05:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:23.026 11:05:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.026 11:05:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:23.026 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:23.026 11:05:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.026 11:05:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.027 11:05:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.027 11:05:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:23.027 11:05:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.027 11:05:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:23.027 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:23.027 11:05:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.027 11:05:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:23.027 11:05:18 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.027 11:05:18 -- 
target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:23.027 11:05:18 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:23.027 11:05:18 -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:23.027 11:05:18 -- target/perf_adq.sh@53 -- # rmmod ice 00:19:23.027 11:05:19 -- target/perf_adq.sh@54 -- # modprobe ice 00:19:26.318 11:05:22 -- target/perf_adq.sh@55 -- # sleep 5 00:19:31.601 11:05:27 -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:31.601 11:05:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:31.601 11:05:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.601 11:05:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:31.601 11:05:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:31.601 11:05:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:31.601 11:05:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.601 11:05:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.601 11:05:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.601 11:05:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:31.601 11:05:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:31.601 11:05:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:31.601 11:05:27 -- common/autotest_common.sh@10 -- # set +x 00:19:31.601 11:05:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:31.601 11:05:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.601 11:05:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.601 11:05:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.601 11:05:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.601 11:05:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.601 11:05:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.601 11:05:27 -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.601 11:05:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.601 11:05:27 -- nvmf/common.sh@296 -- # e810=() 00:19:31.601 11:05:27 -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.601 11:05:27 -- nvmf/common.sh@297 -- # x722=() 00:19:31.601 11:05:27 -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.601 11:05:27 -- nvmf/common.sh@298 -- # mlx=() 00:19:31.601 11:05:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.601 11:05:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.601 11:05:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.601 11:05:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.602 11:05:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.602 11:05:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.602 11:05:27 -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.602 11:05:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.602 11:05:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.602 11:05:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:31.602 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:31.602 11:05:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.602 11:05:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:31.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:31.602 11:05:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.602 11:05:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.602 11:05:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.602 11:05:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:31.602 11:05:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.602 11:05:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:31.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:31.602 11:05:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.602 11:05:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.602 11:05:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.602 11:05:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:31.602 11:05:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.602 11:05:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:31.602 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:31.602 11:05:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.602 11:05:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:31.602 11:05:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:31.602 11:05:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:31.602 11:05:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:31.602 11:05:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.602 11:05:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.602 11:05:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.602 11:05:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.602 11:05:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.602 11:05:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.602 11:05:27 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.602 11:05:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.602 11:05:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.602 11:05:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.602 11:05:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.602 11:05:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.602 11:05:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.602 11:05:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.602 11:05:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.602 11:05:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.602 11:05:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.602 11:05:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.602 11:05:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.602 11:05:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:31.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:19:31.602 00:19:31.602 --- 10.0.0.2 ping statistics --- 00:19:31.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.602 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:19:31.602 11:05:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:19:31.602 00:19:31.602 --- 10.0.0.1 ping statistics --- 00:19:31.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.602 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:19:31.602 11:05:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.602 11:05:28 -- nvmf/common.sh@411 -- # return 0 00:19:31.602 11:05:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:31.602 11:05:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.602 11:05:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:31.602 11:05:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:31.602 11:05:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.602 11:05:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:31.602 11:05:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:31.602 11:05:28 -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:31.602 11:05:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:31.602 11:05:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:31.602 11:05:28 -- common/autotest_common.sh@10 -- # set +x 00:19:31.602 11:05:28 -- nvmf/common.sh@470 -- # nvmfpid=379069 00:19:31.602 11:05:28 -- nvmf/common.sh@471 -- # waitforlisten 379069 00:19:31.602 11:05:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:31.602 11:05:28 -- common/autotest_common.sh@827 -- # '[' -z 379069 ']' 00:19:31.602 11:05:28 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.602 11:05:28 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:31.602 11:05:28 -- common/autotest_common.sh@834 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.602 11:05:28 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:31.602 11:05:28 -- common/autotest_common.sh@10 -- # set +x 00:19:31.602 [2024-05-15 11:05:28.166054] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:19:31.602 [2024-05-15 11:05:28.166131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.602 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.602 [2024-05-15 11:05:28.239485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.863 [2024-05-15 11:05:28.317444] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.863 [2024-05-15 11:05:28.317483] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.863 [2024-05-15 11:05:28.317491] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.863 [2024-05-15 11:05:28.317498] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.863 [2024-05-15 11:05:28.317504] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.863 [2024-05-15 11:05:28.317596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.863 [2024-05-15 11:05:28.317822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.863 [2024-05-15 11:05:28.317823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.863 [2024-05-15 11:05:28.317669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.434 11:05:28 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:32.434 11:05:28 -- common/autotest_common.sh@860 -- # return 0 00:19:32.434 11:05:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:32.434 11:05:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.434 11:05:28 -- common/autotest_common.sh@10 -- # set +x 00:19:32.434 11:05:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.434 11:05:28 -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:32.434 11:05:28 -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:32.434 11:05:28 -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:32.434 11:05:28 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.434 11:05:28 -- common/autotest_common.sh@10 -- # set +x 00:19:32.434 11:05:28 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.434 11:05:29 -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:32.434 11:05:29 -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:32.434 11:05:29 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.434 11:05:29 -- common/autotest_common.sh@10 -- # set +x 00:19:32.434 11:05:29 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.434 11:05:29 -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:32.434 11:05:29 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.434 11:05:29 -- common/autotest_common.sh@10 -- # set 
+x 00:19:32.695 11:05:29 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.695 11:05:29 -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:32.695 11:05:29 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.695 11:05:29 -- common/autotest_common.sh@10 -- # set +x 00:19:32.695 [2024-05-15 11:05:29.123792] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.695 11:05:29 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.696 11:05:29 -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:32.696 11:05:29 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.696 11:05:29 -- common/autotest_common.sh@10 -- # set +x 00:19:32.696 Malloc1 00:19:32.696 11:05:29 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.696 11:05:29 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:32.696 11:05:29 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.696 11:05:29 -- common/autotest_common.sh@10 -- # set +x 00:19:32.696 11:05:29 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.696 11:05:29 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:32.696 11:05:29 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.696 11:05:29 -- common/autotest_common.sh@10 -- # set +x 00:19:32.696 11:05:29 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.696 11:05:29 -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.696 11:05:29 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.696 11:05:29 -- common/autotest_common.sh@10 -- # set +x 00:19:32.696 [2024-05-15 11:05:29.182939] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:32.696 [2024-05-15 11:05:29.183168] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.696 11:05:29 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.696 11:05:29 -- target/perf_adq.sh@74 -- # perfpid=379276 00:19:32.696 11:05:29 -- target/perf_adq.sh@75 -- # sleep 2 00:19:32.696 11:05:29 -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:32.696 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.609 11:05:31 -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:34.609 11:05:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.609 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:19:34.609 11:05:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.609 11:05:31 -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:34.609 "tick_rate": 2400000000, 00:19:34.609 "poll_groups": [ 00:19:34.609 { 00:19:34.609 "name": "nvmf_tgt_poll_group_000", 00:19:34.609 "admin_qpairs": 1, 00:19:34.609 "io_qpairs": 1, 00:19:34.609 "current_admin_qpairs": 1, 00:19:34.609 "current_io_qpairs": 1, 00:19:34.609 "pending_bdev_io": 0, 00:19:34.609 "completed_nvme_io": 20484, 00:19:34.609 "transports": [ 00:19:34.609 { 00:19:34.609 "trtype": "TCP" 00:19:34.609 } 00:19:34.609 ] 00:19:34.609 }, 00:19:34.609 { 00:19:34.609 
"name": "nvmf_tgt_poll_group_001", 00:19:34.609 "admin_qpairs": 0, 00:19:34.609 "io_qpairs": 1, 00:19:34.609 "current_admin_qpairs": 0, 00:19:34.609 "current_io_qpairs": 1, 00:19:34.609 "pending_bdev_io": 0, 00:19:34.609 "completed_nvme_io": 28435, 00:19:34.609 "transports": [ 00:19:34.609 { 00:19:34.609 "trtype": "TCP" 00:19:34.609 } 00:19:34.609 ] 00:19:34.609 }, 00:19:34.609 { 00:19:34.609 "name": "nvmf_tgt_poll_group_002", 00:19:34.609 "admin_qpairs": 0, 00:19:34.609 "io_qpairs": 1, 00:19:34.609 "current_admin_qpairs": 0, 00:19:34.609 "current_io_qpairs": 1, 00:19:34.609 "pending_bdev_io": 0, 00:19:34.609 "completed_nvme_io": 22348, 00:19:34.609 "transports": [ 00:19:34.609 { 00:19:34.610 "trtype": "TCP" 00:19:34.610 } 00:19:34.610 ] 00:19:34.610 }, 00:19:34.610 { 00:19:34.610 "name": "nvmf_tgt_poll_group_003", 00:19:34.610 "admin_qpairs": 0, 00:19:34.610 "io_qpairs": 1, 00:19:34.610 "current_admin_qpairs": 0, 00:19:34.610 "current_io_qpairs": 1, 00:19:34.610 "pending_bdev_io": 0, 00:19:34.610 "completed_nvme_io": 20788, 00:19:34.610 "transports": [ 00:19:34.610 { 00:19:34.610 "trtype": "TCP" 00:19:34.610 } 00:19:34.610 ] 00:19:34.610 } 00:19:34.610 ] 00:19:34.610 }' 00:19:34.610 11:05:31 -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:34.610 11:05:31 -- target/perf_adq.sh@78 -- # wc -l 00:19:34.870 11:05:31 -- target/perf_adq.sh@78 -- # count=4 00:19:34.870 11:05:31 -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:34.870 11:05:31 -- target/perf_adq.sh@83 -- # wait 379276 00:19:43.007 Initializing NVMe Controllers 00:19:43.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:43.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:43.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:43.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:43.007 Initialization complete. Launching workers. 
00:19:43.007 ======================================================== 00:19:43.007 Latency(us) 00:19:43.007 Device Information : IOPS MiB/s Average min max 00:19:43.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14511.77 56.69 4410.77 1108.30 8544.98 00:19:43.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15343.46 59.94 4170.90 1366.58 9391.12 00:19:43.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14217.47 55.54 4501.34 1400.80 9271.48 00:19:43.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11631.50 45.44 5502.46 1314.48 10747.30 00:19:43.007 ======================================================== 00:19:43.007 Total : 55704.20 217.59 4595.77 1108.30 10747.30 00:19:43.007 00:19:43.007 11:05:39 -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:43.007 11:05:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:43.007 11:05:39 -- nvmf/common.sh@117 -- # sync 00:19:43.007 11:05:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.007 11:05:39 -- nvmf/common.sh@120 -- # set +e 00:19:43.007 11:05:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.007 11:05:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.007 rmmod nvme_tcp 00:19:43.007 rmmod nvme_fabrics 00:19:43.007 rmmod nvme_keyring 00:19:43.007 11:05:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.007 11:05:39 -- nvmf/common.sh@124 -- # set -e 00:19:43.007 11:05:39 -- nvmf/common.sh@125 -- # return 0 00:19:43.007 11:05:39 -- nvmf/common.sh@478 -- # '[' -n 379069 ']' 00:19:43.007 11:05:39 -- nvmf/common.sh@479 -- # killprocess 379069 00:19:43.007 11:05:39 -- common/autotest_common.sh@946 -- # '[' -z 379069 ']' 00:19:43.007 11:05:39 -- common/autotest_common.sh@950 -- # kill -0 379069 00:19:43.007 11:05:39 -- common/autotest_common.sh@951 -- # uname 00:19:43.007 11:05:39 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:43.007 11:05:39 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 379069 00:19:43.007 11:05:39 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:43.007 11:05:39 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:43.007 11:05:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 379069' 00:19:43.007 killing process with pid 379069 00:19:43.007 11:05:39 -- common/autotest_common.sh@965 -- # kill 379069 00:19:43.007 [2024-05-15 11:05:39.471315] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:43.007 11:05:39 -- common/autotest_common.sh@970 -- # wait 379069 00:19:43.007 11:05:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:43.007 11:05:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:43.007 11:05:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:43.007 11:05:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.007 11:05:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.007 11:05:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.007 11:05:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.007 11:05:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.552 11:05:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.552 11:05:41 -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:45.552 11:05:41 -- 
target/perf_adq.sh@53 -- # rmmod ice 00:19:46.932 11:05:43 -- target/perf_adq.sh@54 -- # modprobe ice 00:19:48.842 11:05:45 -- target/perf_adq.sh@55 -- # sleep 5 00:19:54.127 11:05:50 -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:54.127 11:05:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:54.127 11:05:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.127 11:05:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:54.127 11:05:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:54.127 11:05:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:54.127 11:05:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.127 11:05:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.127 11:05:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.127 11:05:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:54.127 11:05:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.127 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:19:54.127 11:05:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:54.127 11:05:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:54.127 11:05:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:54.127 11:05:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:54.127 11:05:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:54.127 11:05:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:54.127 11:05:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:54.127 11:05:50 -- nvmf/common.sh@295 -- # net_devs=() 00:19:54.127 11:05:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:54.127 11:05:50 -- nvmf/common.sh@296 -- # e810=() 00:19:54.127 11:05:50 -- nvmf/common.sh@296 -- # local -ga e810 00:19:54.127 11:05:50 -- nvmf/common.sh@297 -- # x722=() 00:19:54.127 11:05:50 -- nvmf/common.sh@297 -- # local -ga x722 00:19:54.127 11:05:50 -- nvmf/common.sh@298 -- # mlx=() 00:19:54.127 11:05:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:54.127 11:05:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.127 11:05:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:54.127 11:05:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:54.127 11:05:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:54.127 11:05:50 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.127 11:05:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:54.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:54.127 11:05:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.127 11:05:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:54.127 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:54.127 11:05:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:54.127 11:05:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.127 11:05:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.127 11:05:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:54.127 11:05:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.127 11:05:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:54.127 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:54.127 11:05:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.127 11:05:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.127 11:05:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.127 11:05:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:54.127 11:05:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.127 11:05:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:54.127 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:54.127 11:05:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.127 11:05:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:54.127 11:05:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:54.127 11:05:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:54.127 11:05:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.127 11:05:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.127 11:05:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.127 11:05:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:54.127 11:05:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.127 11:05:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.127 11:05:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:54.127 11:05:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.127 11:05:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.127 11:05:50 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:54.127 11:05:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:54.127 11:05:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.127 11:05:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.127 11:05:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.127 11:05:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.127 11:05:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:54.127 11:05:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.127 11:05:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.127 11:05:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.127 11:05:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:54.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.841 ms 00:19:54.127 00:19:54.127 --- 10.0.0.2 ping statistics --- 00:19:54.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.127 rtt min/avg/max/mdev = 0.841/0.841/0.841/0.000 ms 00:19:54.127 11:05:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:54.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:19:54.127 00:19:54.127 --- 10.0.0.1 ping statistics --- 00:19:54.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.127 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:19:54.127 11:05:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.127 11:05:50 -- nvmf/common.sh@411 -- # return 0 00:19:54.127 11:05:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:54.127 11:05:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.127 11:05:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:54.127 11:05:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.127 11:05:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:54.127 11:05:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:54.127 11:05:50 -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:54.127 11:05:50 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:54.127 11:05:50 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:54.127 11:05:50 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:54.127 net.core.busy_poll = 1 00:19:54.127 11:05:50 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:54.127 net.core.busy_read = 1 00:19:54.127 11:05:50 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:54.127 11:05:50 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:54.127 11:05:50 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:54.127 11:05:50 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:54.385 11:05:50 -- 
target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:54.386 11:05:50 -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:54.386 11:05:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:54.386 11:05:50 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:54.386 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:19:54.386 11:05:50 -- nvmf/common.sh@470 -- # nvmfpid=384051 00:19:54.386 11:05:50 -- nvmf/common.sh@471 -- # waitforlisten 384051 00:19:54.386 11:05:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:54.386 11:05:50 -- common/autotest_common.sh@827 -- # '[' -z 384051 ']' 00:19:54.386 11:05:50 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.386 11:05:50 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:54.386 11:05:50 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.386 11:05:50 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:54.386 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:19:54.386 [2024-05-15 11:05:50.910292] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:19:54.386 [2024-05-15 11:05:50.910358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.386 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.386 [2024-05-15 11:05:50.980255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.645 [2024-05-15 11:05:51.054231] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.645 [2024-05-15 11:05:51.054270] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.645 [2024-05-15 11:05:51.054277] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.645 [2024-05-15 11:05:51.054283] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.645 [2024-05-15 11:05:51.054289] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
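The ADQ driver setup replayed just above (perf_adq.sh@22-38) reduces to a short ethtool/sysctl/tc sequence. A minimal sketch of that sequence, assuming the E810 port cvl_0_0 already sits inside the cvl_0_0_ns_spdk namespace and the target listens on 10.0.0.2:4420 as in this run; the interface name, queue layout (2 traffic classes, 2 queues each) and addresses are taken from the log and are not general defaults:

NS="ip netns exec cvl_0_0_ns_spdk"
IFACE=cvl_0_0

# Enable hardware TC offload and turn off packet-inspect optimization on the E810 port
$NS ethtool --offload "$IFACE" hw-tc-offload on
$NS ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Busy polling keeps application threads spinning on their sockets (host-wide sysctls)
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 gets queues 0-1, TC1 (the ADQ class) gets queues 2-3
$NS tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev "$IFACE" ingress

# Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware (skip_sw)
$NS tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Pin transmit queues to receive queues with the helper shipped in the SPDK tree
$NS scripts/perf/nvmf/set_xps_rxqs "$IFACE"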
00:19:54.645 [2024-05-15 11:05:51.054430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.645 [2024-05-15 11:05:51.054557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.645 [2024-05-15 11:05:51.054666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.645 [2024-05-15 11:05:51.054667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.214 11:05:51 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:55.214 11:05:51 -- common/autotest_common.sh@860 -- # return 0 00:19:55.214 11:05:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:55.214 11:05:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.214 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 11:05:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.214 11:05:51 -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:55.214 11:05:51 -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:55.214 11:05:51 -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:55.214 11:05:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.214 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 11:05:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.214 11:05:51 -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:55.214 11:05:51 -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:55.214 11:05:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.214 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 11:05:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.214 11:05:51 -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:55.214 11:05:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.214 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 11:05:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.214 11:05:51 -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:55.214 11:05:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.214 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 [2024-05-15 11:05:51.865808] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.473 11:05:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.473 11:05:51 -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:55.473 11:05:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.473 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.473 Malloc1 00:19:55.473 11:05:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.473 11:05:51 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.473 11:05:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.473 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.473 11:05:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.473 11:05:51 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.473 11:05:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.473 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.473 11:05:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
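Outside the harness, the target-side configuration issued through rpc_cmd above can be reproduced with scripts/rpc.py against the running nvmf_tgt. A rough equivalent of the perf_adq.sh@42-48 steps (the listener added right after this point in the log is included for completeness), assuming the default /var/tmp/spdk.sock RPC socket:

rpc=scripts/rpc.py   # rpc_cmd in the harness wraps this script

# ADQ needs placement IDs on the posix socket implementation
impl=$($rpc sock_get_default_impl | jq -r .impl_name)
$rpc sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$impl"

# The target was started with --wait-for-rpc, so initialization finishes only here
$rpc framework_start_init

# TCP transport with 8 KiB IO units and socket priority 1 (matches the ADQ traffic class)
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

# One 64 MiB malloc bdev exported through a single subsystem on 10.0.0.2:4420
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420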
00:19:55.473 11:05:51 -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.473 11:05:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.473 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.473 [2024-05-15 11:05:51.924980] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:55.473 [2024-05-15 11:05:51.925195] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.473 11:05:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.473 11:05:51 -- target/perf_adq.sh@96 -- # perfpid=384109 00:19:55.473 11:05:51 -- target/perf_adq.sh@97 -- # sleep 2 00:19:55.474 11:05:51 -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:55.474 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.383 11:05:53 -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:57.383 11:05:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.383 11:05:53 -- common/autotest_common.sh@10 -- # set +x 00:19:57.383 11:05:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.383 11:05:53 -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:57.383 "tick_rate": 2400000000, 00:19:57.383 "poll_groups": [ 00:19:57.383 { 00:19:57.383 "name": "nvmf_tgt_poll_group_000", 00:19:57.383 "admin_qpairs": 1, 00:19:57.383 "io_qpairs": 1, 00:19:57.383 "current_admin_qpairs": 1, 00:19:57.383 "current_io_qpairs": 1, 00:19:57.383 "pending_bdev_io": 0, 00:19:57.383 "completed_nvme_io": 26993, 00:19:57.383 "transports": [ 00:19:57.383 { 00:19:57.383 "trtype": "TCP" 00:19:57.383 } 00:19:57.383 ] 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "name": "nvmf_tgt_poll_group_001", 00:19:57.383 "admin_qpairs": 0, 00:19:57.383 "io_qpairs": 3, 00:19:57.383 "current_admin_qpairs": 0, 00:19:57.383 "current_io_qpairs": 3, 00:19:57.383 "pending_bdev_io": 0, 00:19:57.383 "completed_nvme_io": 40970, 00:19:57.383 "transports": [ 00:19:57.383 { 00:19:57.383 "trtype": "TCP" 00:19:57.383 } 00:19:57.383 ] 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "name": "nvmf_tgt_poll_group_002", 00:19:57.383 "admin_qpairs": 0, 00:19:57.383 "io_qpairs": 0, 00:19:57.383 "current_admin_qpairs": 0, 00:19:57.383 "current_io_qpairs": 0, 00:19:57.383 "pending_bdev_io": 0, 00:19:57.383 "completed_nvme_io": 0, 00:19:57.383 "transports": [ 00:19:57.383 { 00:19:57.383 "trtype": "TCP" 00:19:57.383 } 00:19:57.383 ] 00:19:57.383 }, 00:19:57.383 { 00:19:57.383 "name": "nvmf_tgt_poll_group_003", 00:19:57.383 "admin_qpairs": 0, 00:19:57.383 "io_qpairs": 0, 00:19:57.383 "current_admin_qpairs": 0, 00:19:57.383 "current_io_qpairs": 0, 00:19:57.383 "pending_bdev_io": 0, 00:19:57.383 "completed_nvme_io": 0, 00:19:57.383 "transports": [ 00:19:57.383 { 00:19:57.383 "trtype": "TCP" 00:19:57.383 } 00:19:57.383 ] 00:19:57.383 } 00:19:57.383 ] 00:19:57.383 }' 00:19:57.383 11:05:53 -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:57.383 11:05:53 -- target/perf_adq.sh@100 -- # wc -l 00:19:57.383 11:05:54 -- target/perf_adq.sh@100 -- # count=2 00:19:57.383 11:05:54 -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:57.383 11:05:54 -- target/perf_adq.sh@106 -- # 
wait 384109 00:20:05.518 Initializing NVMe Controllers 00:20:05.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:05.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:05.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:05.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:05.519 Initialization complete. Launching workers. 00:20:05.519 ======================================================== 00:20:05.519 Latency(us) 00:20:05.519 Device Information : IOPS MiB/s Average min max 00:20:05.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7611.40 29.73 8410.53 1302.25 54023.47 00:20:05.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7443.50 29.08 8597.38 1135.09 53872.23 00:20:05.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 18013.09 70.36 3563.42 966.51 46230.44 00:20:05.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7048.60 27.53 9095.08 1155.61 55262.92 00:20:05.519 ======================================================== 00:20:05.519 Total : 40116.59 156.71 6389.03 966.51 55262.92 00:20:05.519 00:20:05.519 11:06:02 -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:05.519 11:06:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:05.519 11:06:02 -- nvmf/common.sh@117 -- # sync 00:20:05.519 11:06:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.519 11:06:02 -- nvmf/common.sh@120 -- # set +e 00:20:05.519 11:06:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.519 11:06:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.519 rmmod nvme_tcp 00:20:05.519 rmmod nvme_fabrics 00:20:05.779 rmmod nvme_keyring 00:20:05.779 11:06:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.779 11:06:02 -- nvmf/common.sh@124 -- # set -e 00:20:05.779 11:06:02 -- nvmf/common.sh@125 -- # return 0 00:20:05.779 11:06:02 -- nvmf/common.sh@478 -- # '[' -n 384051 ']' 00:20:05.779 11:06:02 -- nvmf/common.sh@479 -- # killprocess 384051 00:20:05.779 11:06:02 -- common/autotest_common.sh@946 -- # '[' -z 384051 ']' 00:20:05.779 11:06:02 -- common/autotest_common.sh@950 -- # kill -0 384051 00:20:05.779 11:06:02 -- common/autotest_common.sh@951 -- # uname 00:20:05.779 11:06:02 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:05.779 11:06:02 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 384051 00:20:05.779 11:06:02 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:05.779 11:06:02 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:05.779 11:06:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 384051' 00:20:05.779 killing process with pid 384051 00:20:05.779 11:06:02 -- common/autotest_common.sh@965 -- # kill 384051 00:20:05.779 [2024-05-15 11:06:02.268231] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:05.779 11:06:02 -- common/autotest_common.sh@970 -- # wait 384051 00:20:05.779 11:06:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:05.779 11:06:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:05.779 11:06:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 
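The pass/fail criterion for this test is the nvmf_get_stats snapshot shown earlier (perf_adq.sh@99-101): with ADQ steering active, IO queue pairs should land on only a subset of the four poll groups, so the harness requires at least two groups with current_io_qpairs == 0. A standalone sketch of the same check, assuming the same scripts/rpc.py socket:

# Count poll groups that received no IO queue pairs; ADQ steering should leave >= 2 idle
count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering check failed: only $count idle poll groups" >&2
    exit 1
fi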
00:20:05.779 11:06:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.779 11:06:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.779 11:06:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.779 11:06:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.779 11:06:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.079 11:06:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.079 11:06:05 -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:09.079 00:20:09.079 real 0m54.154s 00:20:09.079 user 2m49.941s 00:20:09.079 sys 0m11.279s 00:20:09.079 11:06:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:09.079 11:06:05 -- common/autotest_common.sh@10 -- # set +x 00:20:09.079 ************************************ 00:20:09.079 END TEST nvmf_perf_adq 00:20:09.079 ************************************ 00:20:09.079 11:06:05 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:09.079 11:06:05 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:09.079 11:06:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:09.079 11:06:05 -- common/autotest_common.sh@10 -- # set +x 00:20:09.079 ************************************ 00:20:09.079 START TEST nvmf_shutdown 00:20:09.079 ************************************ 00:20:09.079 11:06:05 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:09.079 * Looking for test storage... 00:20:09.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:09.079 11:06:05 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.079 11:06:05 -- nvmf/common.sh@7 -- # uname -s 00:20:09.079 11:06:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.079 11:06:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.079 11:06:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.079 11:06:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.079 11:06:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.079 11:06:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.079 11:06:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.079 11:06:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.079 11:06:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.080 11:06:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.080 11:06:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.080 11:06:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.080 11:06:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.080 11:06:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.080 11:06:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.080 11:06:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.080 11:06:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.080 11:06:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.080 11:06:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.080 11:06:05 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.080 11:06:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.080 11:06:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.080 11:06:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.080 11:06:05 -- paths/export.sh@5 -- # export PATH 00:20:09.080 11:06:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.080 11:06:05 -- nvmf/common.sh@47 -- # : 0 00:20:09.080 11:06:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.080 11:06:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.080 11:06:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.080 11:06:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.080 11:06:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.080 11:06:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.080 11:06:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.080 11:06:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.080 11:06:05 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:09.080 11:06:05 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:09.080 11:06:05 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:09.080 11:06:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:09.080 11:06:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:09.080 11:06:05 -- common/autotest_common.sh@10 -- # set +x 00:20:09.341 
************************************ 00:20:09.341 START TEST nvmf_shutdown_tc1 00:20:09.341 ************************************ 00:20:09.341 11:06:05 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:20:09.341 11:06:05 -- target/shutdown.sh@74 -- # starttarget 00:20:09.341 11:06:05 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:09.341 11:06:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:09.341 11:06:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.341 11:06:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:09.341 11:06:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:09.341 11:06:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:09.341 11:06:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.341 11:06:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.341 11:06:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.341 11:06:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:09.341 11:06:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:09.341 11:06:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.341 11:06:05 -- common/autotest_common.sh@10 -- # set +x 00:20:15.963 11:06:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:15.963 11:06:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:15.963 11:06:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:15.963 11:06:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:15.963 11:06:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:15.963 11:06:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:15.963 11:06:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:15.963 11:06:12 -- nvmf/common.sh@295 -- # net_devs=() 00:20:15.963 11:06:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:15.963 11:06:12 -- nvmf/common.sh@296 -- # e810=() 00:20:15.963 11:06:12 -- nvmf/common.sh@296 -- # local -ga e810 00:20:15.963 11:06:12 -- nvmf/common.sh@297 -- # x722=() 00:20:15.963 11:06:12 -- nvmf/common.sh@297 -- # local -ga x722 00:20:15.963 11:06:12 -- nvmf/common.sh@298 -- # mlx=() 00:20:15.963 11:06:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:15.963 11:06:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.963 11:06:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.963 11:06:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.963 11:06:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.963 11:06:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.963 11:06:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.963 11:06:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.963 11:06:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.963 11:06:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.223 11:06:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.224 11:06:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.224 11:06:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.224 11:06:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:20:16.224 11:06:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.224 11:06:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.224 11:06:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:16.224 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:16.224 11:06:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.224 11:06:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:16.224 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:16.224 11:06:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.224 11:06:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.224 11:06:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.224 11:06:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:16.224 11:06:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.224 11:06:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:16.224 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:16.224 11:06:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.224 11:06:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.224 11:06:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.224 11:06:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:16.224 11:06:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.224 11:06:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:16.224 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:16.224 11:06:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.224 11:06:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:16.224 11:06:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:16.224 11:06:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:16.224 11:06:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:16.224 11:06:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.224 11:06:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.224 11:06:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.224 11:06:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.224 11:06:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.224 11:06:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.224 11:06:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.224 11:06:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.224 11:06:12 -- 
nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.224 11:06:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.224 11:06:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.224 11:06:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.224 11:06:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.224 11:06:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.224 11:06:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.224 11:06:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.224 11:06:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.485 11:06:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.485 11:06:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.485 11:06:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:20:16.485 00:20:16.485 --- 10.0.0.2 ping statistics --- 00:20:16.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.485 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:20:16.485 11:06:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:20:16.485 00:20:16.485 --- 10.0.0.1 ping statistics --- 00:20:16.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.485 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:20:16.485 11:06:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.485 11:06:12 -- nvmf/common.sh@411 -- # return 0 00:20:16.485 11:06:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:16.485 11:06:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.485 11:06:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:16.485 11:06:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:16.485 11:06:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.485 11:06:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:16.485 11:06:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:16.485 11:06:12 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:16.485 11:06:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:16.485 11:06:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:16.485 11:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:16.485 11:06:12 -- nvmf/common.sh@470 -- # nvmfpid=391239 00:20:16.485 11:06:12 -- nvmf/common.sh@471 -- # waitforlisten 391239 00:20:16.485 11:06:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:16.485 11:06:12 -- common/autotest_common.sh@827 -- # '[' -z 391239 ']' 00:20:16.485 11:06:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.485 11:06:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.485 11:06:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
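Both this test and the perf_adq run above build the same loopback topology before starting the target: the two E810 ports are wired back to back, one is pushed into a private namespace to act as the target side, and the remaining port stays in the root namespace as the initiator. A condensed sketch of the nvmf_tcp_init steps replayed above, using the interface names and addresses from this log:

# cvl_0_0 becomes the target side inside the namespace, cvl_0_1 stays as the initiator side
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in on the initiator side and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1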
00:20:16.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.485 11:06:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.485 11:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:16.485 [2024-05-15 11:06:13.014777] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:20:16.485 [2024-05-15 11:06:13.014840] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.485 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.486 [2024-05-15 11:06:13.102819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.745 [2024-05-15 11:06:13.196910] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.745 [2024-05-15 11:06:13.196968] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.745 [2024-05-15 11:06:13.196977] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.745 [2024-05-15 11:06:13.196983] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.745 [2024-05-15 11:06:13.196989] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.745 [2024-05-15 11:06:13.197114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.745 [2024-05-15 11:06:13.197284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.745 [2024-05-15 11:06:13.197450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.745 [2024-05-15 11:06:13.197451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:17.315 11:06:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:17.315 11:06:13 -- common/autotest_common.sh@860 -- # return 0 00:20:17.315 11:06:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:17.315 11:06:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.315 11:06:13 -- common/autotest_common.sh@10 -- # set +x 00:20:17.315 11:06:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.315 11:06:13 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.315 11:06:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.315 11:06:13 -- common/autotest_common.sh@10 -- # set +x 00:20:17.315 [2024-05-15 11:06:13.846013] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.315 11:06:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.315 11:06:13 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:17.315 11:06:13 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:17.315 11:06:13 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:17.315 11:06:13 -- common/autotest_common.sh@10 -- # set +x 00:20:17.315 11:06:13 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- 
# for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.315 11:06:13 -- target/shutdown.sh@28 -- # cat 00:20:17.315 11:06:13 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:17.315 11:06:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.315 11:06:13 -- common/autotest_common.sh@10 -- # set +x 00:20:17.315 Malloc1 00:20:17.315 [2024-05-15 11:06:13.946934] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:17.315 [2024-05-15 11:06:13.947141] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.315 Malloc2 00:20:17.574 Malloc3 00:20:17.574 Malloc4 00:20:17.574 Malloc5 00:20:17.574 Malloc6 00:20:17.574 Malloc7 00:20:17.574 Malloc8 00:20:17.836 Malloc9 00:20:17.836 Malloc10 00:20:17.836 11:06:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.836 11:06:14 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:17.836 11:06:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.836 11:06:14 -- common/autotest_common.sh@10 -- # set +x 00:20:17.836 11:06:14 -- target/shutdown.sh@78 -- # perfpid=391505 00:20:17.836 11:06:14 -- target/shutdown.sh@79 -- # waitforlisten 391505 /var/tmp/bdevperf.sock 00:20:17.836 11:06:14 -- common/autotest_common.sh@827 -- # '[' -z 391505 ']' 00:20:17.836 11:06:14 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.836 11:06:14 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:17.836 11:06:14 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
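The create_subsystems phase above loops over num_subsystems={1..10}; the ten cat steps at shutdown.sh@28 append one block of RPC calls per subsystem to rpcs.txt, which is then replayed against the target in a single batch. The RPC bodies themselves are not echoed into the log, but given the Malloc1..Malloc10 bdevs and the cnode1..cnode10 subsystems that show up afterwards, each block plausibly resembles the following (a sketch, not the verbatim harness contents):

rpcs=/tmp/rpcs.txt
: > "$rpcs"
for i in {1..10}; do
    cat >> "$rpcs" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# rpc.py can replay one RPC per line when fed the batch on stdin
scripts/rpc.py < "$rpcs"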
00:20:17.836 11:06:14 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:17.836 11:06:14 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:17.836 11:06:14 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:17.836 11:06:14 -- common/autotest_common.sh@10 -- # set +x 00:20:17.836 11:06:14 -- nvmf/common.sh@521 -- # config=() 00:20:17.836 11:06:14 -- nvmf/common.sh@521 -- # local subsystem config 00:20:17.836 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.836 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.836 { 00:20:17.836 "params": { 00:20:17.836 "name": "Nvme$subsystem", 00:20:17.836 "trtype": "$TEST_TRANSPORT", 00:20:17.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.836 "adrfam": "ipv4", 00:20:17.836 "trsvcid": "$NVMF_PORT", 00:20:17.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.836 "hdgst": ${hdgst:-false}, 00:20:17.836 "ddgst": ${ddgst:-false} 00:20:17.836 }, 00:20:17.836 "method": "bdev_nvme_attach_controller" 00:20:17.836 } 00:20:17.836 EOF 00:20:17.836 )") 00:20:17.836 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.836 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.836 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.836 { 00:20:17.836 "params": { 00:20:17.836 "name": "Nvme$subsystem", 00:20:17.836 "trtype": "$TEST_TRANSPORT", 00:20:17.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.836 "adrfam": "ipv4", 00:20:17.836 "trsvcid": "$NVMF_PORT", 00:20:17.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.836 "hdgst": ${hdgst:-false}, 00:20:17.836 "ddgst": ${ddgst:-false} 00:20:17.836 }, 00:20:17.836 "method": "bdev_nvme_attach_controller" 00:20:17.836 } 00:20:17.836 EOF 00:20:17.836 )") 00:20:17.836 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.836 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.836 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.836 { 00:20:17.836 "params": { 00:20:17.836 "name": "Nvme$subsystem", 00:20:17.836 "trtype": "$TEST_TRANSPORT", 00:20:17.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.836 "adrfam": "ipv4", 00:20:17.836 "trsvcid": "$NVMF_PORT", 00:20:17.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.836 "hdgst": ${hdgst:-false}, 00:20:17.836 "ddgst": ${ddgst:-false} 00:20:17.836 }, 00:20:17.836 "method": "bdev_nvme_attach_controller" 00:20:17.836 } 00:20:17.837 EOF 00:20:17.837 )") 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.837 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.837 { 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme$subsystem", 00:20:17.837 "trtype": "$TEST_TRANSPORT", 00:20:17.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "$NVMF_PORT", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.837 "hdgst": ${hdgst:-false}, 00:20:17.837 "ddgst": ${ddgst:-false} 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 } 00:20:17.837 EOF 00:20:17.837 )") 00:20:17.837 11:06:14 -- 
nvmf/common.sh@543 -- # cat 00:20:17.837 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.837 { 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme$subsystem", 00:20:17.837 "trtype": "$TEST_TRANSPORT", 00:20:17.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "$NVMF_PORT", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.837 "hdgst": ${hdgst:-false}, 00:20:17.837 "ddgst": ${ddgst:-false} 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 } 00:20:17.837 EOF 00:20:17.837 )") 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.837 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.837 { 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme$subsystem", 00:20:17.837 "trtype": "$TEST_TRANSPORT", 00:20:17.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "$NVMF_PORT", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.837 "hdgst": ${hdgst:-false}, 00:20:17.837 "ddgst": ${ddgst:-false} 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 } 00:20:17.837 EOF 00:20:17.837 )") 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.837 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.837 { 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme$subsystem", 00:20:17.837 "trtype": "$TEST_TRANSPORT", 00:20:17.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "$NVMF_PORT", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.837 "hdgst": ${hdgst:-false}, 00:20:17.837 "ddgst": ${ddgst:-false} 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 } 00:20:17.837 EOF 00:20:17.837 )") 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.837 [2024-05-15 11:06:14.419747] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:20:17.837 [2024-05-15 11:06:14.419842] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:17.837 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.837 { 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme$subsystem", 00:20:17.837 "trtype": "$TEST_TRANSPORT", 00:20:17.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "$NVMF_PORT", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.837 "hdgst": ${hdgst:-false}, 00:20:17.837 "ddgst": ${ddgst:-false} 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 } 00:20:17.837 EOF 00:20:17.837 )") 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.837 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.837 { 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme$subsystem", 00:20:17.837 "trtype": "$TEST_TRANSPORT", 00:20:17.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "$NVMF_PORT", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.837 "hdgst": ${hdgst:-false}, 00:20:17.837 "ddgst": ${ddgst:-false} 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 } 00:20:17.837 EOF 00:20:17.837 )") 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.837 11:06:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.837 { 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme$subsystem", 00:20:17.837 "trtype": "$TEST_TRANSPORT", 00:20:17.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "$NVMF_PORT", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.837 "hdgst": ${hdgst:-false}, 00:20:17.837 "ddgst": ${ddgst:-false} 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 } 00:20:17.837 EOF 00:20:17.837 )") 00:20:17.837 11:06:14 -- nvmf/common.sh@543 -- # cat 00:20:17.837 11:06:14 -- nvmf/common.sh@545 -- # jq . 
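The wall of heredocs above is gen_nvmf_target_json at work: for each subsystem it appends one bdev_nvme_attach_controller entry (name, trtype, traddr, adrfam, trsvcid, subnqn, hostnqn, hdgst, ddgst), joins the entries with commas, and pretty-prints the result with jq. A reduced sketch of that pattern; the outer "subsystems"/"bdev" wrapper is assumed from the usual SPDK JSON-config shape rather than shown in this log:

gen_bdev_json() {
    local entries=() i
    for i in "$@"; do
        entries+=("$(cat <<EOF
{ "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$i",
              "hostnqn": "nqn.2016-06.io.spdk:host$i",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,
    # Wrap the per-controller entries in a bdev subsystem config and validate with jq
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${entries[*]} ] } ] }
EOF
}

The harness hands the generated JSON to the application through process substitution, which is why the invocations later in the log show --json /dev/fd/62 and /dev/fd/63; the equivalent standalone call would be along the lines of bdevperf --json <(gen_bdev_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1.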
00:20:17.837 11:06:14 -- nvmf/common.sh@546 -- # IFS=, 00:20:17.837 11:06:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme1", 00:20:17.837 "trtype": "tcp", 00:20:17.837 "traddr": "10.0.0.2", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "4420", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.837 "hdgst": false, 00:20:17.837 "ddgst": false 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 },{ 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme2", 00:20:17.837 "trtype": "tcp", 00:20:17.837 "traddr": "10.0.0.2", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "4420", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:17.837 "hdgst": false, 00:20:17.837 "ddgst": false 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 },{ 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme3", 00:20:17.837 "trtype": "tcp", 00:20:17.837 "traddr": "10.0.0.2", 00:20:17.837 "adrfam": "ipv4", 00:20:17.837 "trsvcid": "4420", 00:20:17.837 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:17.837 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:17.837 "hdgst": false, 00:20:17.837 "ddgst": false 00:20:17.837 }, 00:20:17.837 "method": "bdev_nvme_attach_controller" 00:20:17.837 },{ 00:20:17.837 "params": { 00:20:17.837 "name": "Nvme4", 00:20:17.837 "trtype": "tcp", 00:20:17.837 "traddr": "10.0.0.2", 00:20:17.837 "adrfam": "ipv4", 00:20:17.838 "trsvcid": "4420", 00:20:17.838 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:17.838 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:17.838 "hdgst": false, 00:20:17.838 "ddgst": false 00:20:17.838 }, 00:20:17.838 "method": "bdev_nvme_attach_controller" 00:20:17.838 },{ 00:20:17.838 "params": { 00:20:17.838 "name": "Nvme5", 00:20:17.838 "trtype": "tcp", 00:20:17.838 "traddr": "10.0.0.2", 00:20:17.838 "adrfam": "ipv4", 00:20:17.838 "trsvcid": "4420", 00:20:17.838 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:17.838 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:17.838 "hdgst": false, 00:20:17.838 "ddgst": false 00:20:17.838 }, 00:20:17.838 "method": "bdev_nvme_attach_controller" 00:20:17.838 },{ 00:20:17.838 "params": { 00:20:17.838 "name": "Nvme6", 00:20:17.838 "trtype": "tcp", 00:20:17.838 "traddr": "10.0.0.2", 00:20:17.838 "adrfam": "ipv4", 00:20:17.838 "trsvcid": "4420", 00:20:17.838 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:17.838 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:17.838 "hdgst": false, 00:20:17.838 "ddgst": false 00:20:17.838 }, 00:20:17.838 "method": "bdev_nvme_attach_controller" 00:20:17.838 },{ 00:20:17.838 "params": { 00:20:17.838 "name": "Nvme7", 00:20:17.838 "trtype": "tcp", 00:20:17.838 "traddr": "10.0.0.2", 00:20:17.838 "adrfam": "ipv4", 00:20:17.838 "trsvcid": "4420", 00:20:17.838 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:17.838 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:17.838 "hdgst": false, 00:20:17.838 "ddgst": false 00:20:17.838 }, 00:20:17.838 "method": "bdev_nvme_attach_controller" 00:20:17.838 },{ 00:20:17.838 "params": { 00:20:17.838 "name": "Nvme8", 00:20:17.838 "trtype": "tcp", 00:20:17.838 "traddr": "10.0.0.2", 00:20:17.838 "adrfam": "ipv4", 00:20:17.838 "trsvcid": "4420", 00:20:17.838 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:17.838 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:17.838 "hdgst": false, 00:20:17.838 "ddgst": false 00:20:17.838 }, 00:20:17.838 "method": 
"bdev_nvme_attach_controller" 00:20:17.838 },{ 00:20:17.838 "params": { 00:20:17.838 "name": "Nvme9", 00:20:17.838 "trtype": "tcp", 00:20:17.838 "traddr": "10.0.0.2", 00:20:17.838 "adrfam": "ipv4", 00:20:17.838 "trsvcid": "4420", 00:20:17.838 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:17.838 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:17.838 "hdgst": false, 00:20:17.838 "ddgst": false 00:20:17.838 }, 00:20:17.838 "method": "bdev_nvme_attach_controller" 00:20:17.838 },{ 00:20:17.838 "params": { 00:20:17.838 "name": "Nvme10", 00:20:17.838 "trtype": "tcp", 00:20:17.838 "traddr": "10.0.0.2", 00:20:17.838 "adrfam": "ipv4", 00:20:17.838 "trsvcid": "4420", 00:20:17.838 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:17.838 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:17.838 "hdgst": false, 00:20:17.838 "ddgst": false 00:20:17.838 }, 00:20:17.838 "method": "bdev_nvme_attach_controller" 00:20:17.838 }' 00:20:17.838 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.838 [2024-05-15 11:06:14.483170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.098 [2024-05-15 11:06:14.548277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.481 11:06:15 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:19.481 11:06:15 -- common/autotest_common.sh@860 -- # return 0 00:20:19.481 11:06:15 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:19.481 11:06:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.481 11:06:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.481 11:06:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.481 11:06:15 -- target/shutdown.sh@83 -- # kill -9 391505 00:20:19.481 11:06:15 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:19.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 391505 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:19.481 11:06:15 -- target/shutdown.sh@87 -- # sleep 1 00:20:20.421 11:06:16 -- target/shutdown.sh@88 -- # kill -0 391239 00:20:20.421 11:06:16 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:20.421 11:06:16 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:20.421 11:06:16 -- nvmf/common.sh@521 -- # config=() 00:20:20.421 11:06:16 -- nvmf/common.sh@521 -- # local subsystem config 00:20:20.421 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.421 { 00:20:20.421 "params": { 00:20:20.421 "name": "Nvme$subsystem", 00:20:20.421 "trtype": "$TEST_TRANSPORT", 00:20:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.421 "adrfam": "ipv4", 00:20:20.421 "trsvcid": "$NVMF_PORT", 00:20:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.421 "hdgst": ${hdgst:-false}, 00:20:20.421 "ddgst": ${ddgst:-false} 00:20:20.421 }, 00:20:20.421 "method": "bdev_nvme_attach_controller" 00:20:20.421 } 00:20:20.421 EOF 00:20:20.421 )") 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.421 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.421 { 00:20:20.421 "params": { 00:20:20.421 "name": "Nvme$subsystem", 
00:20:20.421 "trtype": "$TEST_TRANSPORT", 00:20:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.421 "adrfam": "ipv4", 00:20:20.421 "trsvcid": "$NVMF_PORT", 00:20:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.421 "hdgst": ${hdgst:-false}, 00:20:20.421 "ddgst": ${ddgst:-false} 00:20:20.421 }, 00:20:20.421 "method": "bdev_nvme_attach_controller" 00:20:20.421 } 00:20:20.421 EOF 00:20:20.421 )") 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.421 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.421 { 00:20:20.421 "params": { 00:20:20.421 "name": "Nvme$subsystem", 00:20:20.421 "trtype": "$TEST_TRANSPORT", 00:20:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.421 "adrfam": "ipv4", 00:20:20.421 "trsvcid": "$NVMF_PORT", 00:20:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.421 "hdgst": ${hdgst:-false}, 00:20:20.421 "ddgst": ${ddgst:-false} 00:20:20.421 }, 00:20:20.421 "method": "bdev_nvme_attach_controller" 00:20:20.421 } 00:20:20.421 EOF 00:20:20.421 )") 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.421 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.421 { 00:20:20.421 "params": { 00:20:20.421 "name": "Nvme$subsystem", 00:20:20.421 "trtype": "$TEST_TRANSPORT", 00:20:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.421 "adrfam": "ipv4", 00:20:20.421 "trsvcid": "$NVMF_PORT", 00:20:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.421 "hdgst": ${hdgst:-false}, 00:20:20.421 "ddgst": ${ddgst:-false} 00:20:20.421 }, 00:20:20.421 "method": "bdev_nvme_attach_controller" 00:20:20.421 } 00:20:20.421 EOF 00:20:20.421 )") 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.421 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.421 { 00:20:20.421 "params": { 00:20:20.421 "name": "Nvme$subsystem", 00:20:20.421 "trtype": "$TEST_TRANSPORT", 00:20:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.421 "adrfam": "ipv4", 00:20:20.421 "trsvcid": "$NVMF_PORT", 00:20:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.421 "hdgst": ${hdgst:-false}, 00:20:20.421 "ddgst": ${ddgst:-false} 00:20:20.421 }, 00:20:20.421 "method": "bdev_nvme_attach_controller" 00:20:20.421 } 00:20:20.421 EOF 00:20:20.421 )") 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.421 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.421 { 00:20:20.421 "params": { 00:20:20.421 "name": "Nvme$subsystem", 00:20:20.421 "trtype": "$TEST_TRANSPORT", 00:20:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.421 "adrfam": "ipv4", 00:20:20.421 "trsvcid": "$NVMF_PORT", 00:20:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.421 "hdgst": ${hdgst:-false}, 00:20:20.421 "ddgst": ${ddgst:-false} 00:20:20.421 }, 00:20:20.421 "method": "bdev_nvme_attach_controller" 00:20:20.421 } 00:20:20.421 EOF 00:20:20.421 )") 00:20:20.421 11:06:16 -- nvmf/common.sh@543 
-- # cat 00:20:20.421 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.421 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.421 { 00:20:20.421 "params": { 00:20:20.421 "name": "Nvme$subsystem", 00:20:20.422 "trtype": "$TEST_TRANSPORT", 00:20:20.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "$NVMF_PORT", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.422 "hdgst": ${hdgst:-false}, 00:20:20.422 "ddgst": ${ddgst:-false} 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 } 00:20:20.422 EOF 00:20:20.422 )") 00:20:20.422 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.422 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.422 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.422 { 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme$subsystem", 00:20:20.422 "trtype": "$TEST_TRANSPORT", 00:20:20.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "$NVMF_PORT", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.422 "hdgst": ${hdgst:-false}, 00:20:20.422 "ddgst": ${ddgst:-false} 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 } 00:20:20.422 EOF 00:20:20.422 )") 00:20:20.422 [2024-05-15 11:06:16.966076] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:20:20.422 [2024-05-15 11:06:16.966145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392191 ] 00:20:20.422 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.422 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.422 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.422 { 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme$subsystem", 00:20:20.422 "trtype": "$TEST_TRANSPORT", 00:20:20.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "$NVMF_PORT", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.422 "hdgst": ${hdgst:-false}, 00:20:20.422 "ddgst": ${ddgst:-false} 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 } 00:20:20.422 EOF 00:20:20.422 )") 00:20:20.422 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.422 11:06:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.422 11:06:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.422 { 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme$subsystem", 00:20:20.422 "trtype": "$TEST_TRANSPORT", 00:20:20.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "$NVMF_PORT", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.422 "hdgst": ${hdgst:-false}, 00:20:20.422 "ddgst": ${ddgst:-false} 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 } 00:20:20.422 EOF 00:20:20.422 )") 00:20:20.422 11:06:16 -- nvmf/common.sh@543 -- # cat 00:20:20.422 11:06:16 -- nvmf/common.sh@545 -- # jq . 
00:20:20.422 11:06:16 -- nvmf/common.sh@546 -- # IFS=, 00:20:20.422 11:06:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme1", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme2", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme3", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme4", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme5", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme6", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme7", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme8", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": 
"bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme9", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 },{ 00:20:20.422 "params": { 00:20:20.422 "name": "Nvme10", 00:20:20.422 "trtype": "tcp", 00:20:20.422 "traddr": "10.0.0.2", 00:20:20.422 "adrfam": "ipv4", 00:20:20.422 "trsvcid": "4420", 00:20:20.422 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:20.422 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:20.422 "hdgst": false, 00:20:20.422 "ddgst": false 00:20:20.422 }, 00:20:20.422 "method": "bdev_nvme_attach_controller" 00:20:20.422 }' 00:20:20.422 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.422 [2024-05-15 11:06:17.027916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.683 [2024-05-15 11:06:17.092001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.066 Running I/O for 1 seconds... 00:20:23.449 00:20:23.449 Latency(us) 00:20:23.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.449 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme1n1 : 1.11 230.72 14.42 0.00 0.00 274485.97 17585.49 248162.99 00:20:23.449 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme2n1 : 1.07 242.66 15.17 0.00 0.00 255809.69 20862.29 221948.59 00:20:23.449 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme3n1 : 1.08 237.82 14.86 0.00 0.00 255117.01 17257.81 258648.75 00:20:23.449 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme4n1 : 1.08 237.16 14.82 0.00 0.00 252460.59 14636.37 239424.85 00:20:23.449 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme5n1 : 1.08 236.63 14.79 0.00 0.00 248188.16 36700.16 244667.73 00:20:23.449 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme6n1 : 1.18 271.43 16.96 0.00 0.00 213040.81 9994.24 284863.15 00:20:23.449 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme7n1 : 1.16 274.78 17.17 0.00 0.00 207289.60 11960.32 251658.24 00:20:23.449 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme8n1 : 1.18 270.54 16.91 0.00 0.00 206988.46 17257.81 248162.99 00:20:23.449 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 Nvme9n1 : 1.17 219.41 13.71 0.00 0.00 250113.71 18786.99 272629.76 00:20:23.449 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.449 Verification LBA range: start 0x0 length 0x400 00:20:23.449 
Nvme10n1 : 1.19 269.54 16.85 0.00 0.00 200237.82 12561.07 249910.61 00:20:23.449 =================================================================================================================== 00:20:23.450 Total : 2490.69 155.67 0.00 0.00 233724.19 9994.24 284863.15 00:20:23.450 11:06:19 -- target/shutdown.sh@94 -- # stoptarget 00:20:23.450 11:06:19 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:23.450 11:06:19 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:23.450 11:06:19 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:23.450 11:06:19 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:23.450 11:06:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:23.450 11:06:19 -- nvmf/common.sh@117 -- # sync 00:20:23.450 11:06:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.450 11:06:19 -- nvmf/common.sh@120 -- # set +e 00:20:23.450 11:06:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.450 11:06:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.450 rmmod nvme_tcp 00:20:23.450 rmmod nvme_fabrics 00:20:23.450 rmmod nvme_keyring 00:20:23.450 11:06:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.450 11:06:19 -- nvmf/common.sh@124 -- # set -e 00:20:23.450 11:06:19 -- nvmf/common.sh@125 -- # return 0 00:20:23.450 11:06:19 -- nvmf/common.sh@478 -- # '[' -n 391239 ']' 00:20:23.450 11:06:19 -- nvmf/common.sh@479 -- # killprocess 391239 00:20:23.450 11:06:19 -- common/autotest_common.sh@946 -- # '[' -z 391239 ']' 00:20:23.450 11:06:19 -- common/autotest_common.sh@950 -- # kill -0 391239 00:20:23.450 11:06:19 -- common/autotest_common.sh@951 -- # uname 00:20:23.450 11:06:19 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:23.450 11:06:19 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 391239 00:20:23.450 11:06:19 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:23.450 11:06:19 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:23.450 11:06:19 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 391239' 00:20:23.450 killing process with pid 391239 00:20:23.450 11:06:19 -- common/autotest_common.sh@965 -- # kill 391239 00:20:23.450 [2024-05-15 11:06:19.978135] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:23.450 11:06:19 -- common/autotest_common.sh@970 -- # wait 391239 00:20:23.711 11:06:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:23.711 11:06:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:23.711 11:06:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:23.711 11:06:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.711 11:06:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.711 11:06:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.711 11:06:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.711 11:06:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.621 11:06:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:25.882 00:20:25.882 real 0m16.521s 00:20:25.882 user 0m34.161s 00:20:25.882 sys 0m6.469s 00:20:25.882 11:06:22 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:25.882 11:06:22 -- common/autotest_common.sh@10 
-- # set +x 00:20:25.882 ************************************ 00:20:25.882 END TEST nvmf_shutdown_tc1 00:20:25.882 ************************************ 00:20:25.882 11:06:22 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:25.882 11:06:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:25.882 11:06:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:25.882 11:06:22 -- common/autotest_common.sh@10 -- # set +x 00:20:25.882 ************************************ 00:20:25.882 START TEST nvmf_shutdown_tc2 00:20:25.882 ************************************ 00:20:25.882 11:06:22 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:20:25.882 11:06:22 -- target/shutdown.sh@99 -- # starttarget 00:20:25.882 11:06:22 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:25.882 11:06:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:25.882 11:06:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.882 11:06:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:25.882 11:06:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:25.882 11:06:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:25.882 11:06:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.882 11:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.882 11:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.882 11:06:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:25.882 11:06:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.882 11:06:22 -- common/autotest_common.sh@10 -- # set +x 00:20:25.882 11:06:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:25.882 11:06:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:25.882 11:06:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:25.882 11:06:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:25.882 11:06:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:25.882 11:06:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:25.882 11:06:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:25.882 11:06:22 -- nvmf/common.sh@295 -- # net_devs=() 00:20:25.882 11:06:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:25.882 11:06:22 -- nvmf/common.sh@296 -- # e810=() 00:20:25.882 11:06:22 -- nvmf/common.sh@296 -- # local -ga e810 00:20:25.882 11:06:22 -- nvmf/common.sh@297 -- # x722=() 00:20:25.882 11:06:22 -- nvmf/common.sh@297 -- # local -ga x722 00:20:25.882 11:06:22 -- nvmf/common.sh@298 -- # mlx=() 00:20:25.882 11:06:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:25.882 11:06:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@317 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.882 11:06:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:25.882 11:06:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:25.882 11:06:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:25.882 11:06:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.882 11:06:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:25.882 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:25.882 11:06:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.882 11:06:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:25.882 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:25.882 11:06:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:25.882 11:06:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.882 11:06:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.882 11:06:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:25.882 11:06:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.882 11:06:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:25.882 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:25.882 11:06:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.882 11:06:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.882 11:06:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.882 11:06:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:25.882 11:06:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.882 11:06:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:25.882 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:25.882 11:06:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.882 11:06:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:25.882 11:06:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:25.882 11:06:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:25.882 11:06:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:25.882 11:06:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.882 11:06:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.882 11:06:22 -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.882 11:06:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:25.882 11:06:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.882 11:06:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.882 11:06:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:25.882 11:06:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.882 11:06:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.882 11:06:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:25.882 11:06:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:25.882 11:06:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.882 11:06:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.882 11:06:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.882 11:06:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.143 11:06:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.143 11:06:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.143 11:06:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.143 11:06:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.143 11:06:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:20:26.143 00:20:26.143 --- 10.0.0.2 ping statistics --- 00:20:26.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.143 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:20:26.143 11:06:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:20:26.143 00:20:26.143 --- 10.0.0.1 ping statistics --- 00:20:26.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.143 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:20:26.143 11:06:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.143 11:06:22 -- nvmf/common.sh@411 -- # return 0 00:20:26.143 11:06:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:26.143 11:06:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.143 11:06:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:26.143 11:06:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:26.143 11:06:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.143 11:06:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:26.143 11:06:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:26.143 11:06:22 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:26.143 11:06:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:26.143 11:06:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:26.143 11:06:22 -- common/autotest_common.sh@10 -- # set +x 00:20:26.143 11:06:22 -- nvmf/common.sh@470 -- # nvmfpid=393309 00:20:26.143 11:06:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.143 11:06:22 -- nvmf/common.sh@471 -- # waitforlisten 393309 00:20:26.143 11:06:22 -- common/autotest_common.sh@827 -- # '[' -z 393309 ']' 00:20:26.143 11:06:22 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.143 11:06:22 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:26.143 11:06:22 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.143 11:06:22 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:26.143 11:06:22 -- common/autotest_common.sh@10 -- # set +x 00:20:26.143 [2024-05-15 11:06:22.795845] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:20:26.143 [2024-05-15 11:06:22.795901] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.405 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.405 [2024-05-15 11:06:22.877883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.405 [2024-05-15 11:06:22.932223] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.405 [2024-05-15 11:06:22.932252] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.405 [2024-05-15 11:06:22.932258] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.405 [2024-05-15 11:06:22.932265] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.405 [2024-05-15 11:06:22.932269] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.405 [2024-05-15 11:06:22.932368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.405 [2024-05-15 11:06:22.932528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.405 [2024-05-15 11:06:22.932684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.405 [2024-05-15 11:06:22.932683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.973 11:06:23 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:26.973 11:06:23 -- common/autotest_common.sh@860 -- # return 0 00:20:26.973 11:06:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:26.973 11:06:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.973 11:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:26.973 11:06:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.973 11:06:23 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.973 11:06:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.973 11:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:26.973 [2024-05-15 11:06:23.610700] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.973 11:06:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.973 11:06:23 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:26.973 11:06:23 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:26.973 11:06:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:26.973 11:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:26.973 11:06:23 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:27.232 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.233 11:06:23 -- target/shutdown.sh@28 -- # cat 00:20:27.233 11:06:23 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:27.233 11:06:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.233 11:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:27.233 Malloc1 00:20:27.233 [2024-05-15 11:06:23.705287] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:27.233 [2024-05-15 11:06:23.705475] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.233 Malloc2 00:20:27.233 Malloc3 00:20:27.233 Malloc4 00:20:27.233 Malloc5 00:20:27.233 Malloc6 00:20:27.493 Malloc7 00:20:27.493 Malloc8 00:20:27.493 Malloc9 00:20:27.493 Malloc10 00:20:27.493 11:06:24 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.493 11:06:24 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:27.493 11:06:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:27.493 11:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:27.493 11:06:24 -- target/shutdown.sh@103 -- # perfpid=393691 00:20:27.493 11:06:24 -- target/shutdown.sh@104 -- # waitforlisten 393691 /var/tmp/bdevperf.sock 00:20:27.493 11:06:24 -- common/autotest_common.sh@827 -- # '[' -z 393691 ']' 00:20:27.493 11:06:24 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.493 11:06:24 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:27.493 11:06:24 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.493 11:06:24 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:27.493 11:06:24 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:27.493 11:06:24 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:27.493 11:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:27.493 11:06:24 -- nvmf/common.sh@521 -- # config=() 00:20:27.493 11:06:24 -- nvmf/common.sh@521 -- # local subsystem config 00:20:27.493 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.493 { 00:20:27.493 "params": { 00:20:27.493 "name": "Nvme$subsystem", 00:20:27.493 "trtype": "$TEST_TRANSPORT", 00:20:27.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.493 "adrfam": "ipv4", 00:20:27.493 "trsvcid": "$NVMF_PORT", 00:20:27.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.493 "hdgst": ${hdgst:-false}, 00:20:27.493 "ddgst": ${ddgst:-false} 00:20:27.493 }, 00:20:27.493 "method": "bdev_nvme_attach_controller" 00:20:27.493 } 00:20:27.493 EOF 00:20:27.493 )") 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.493 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.493 { 00:20:27.493 "params": { 00:20:27.493 "name": "Nvme$subsystem", 00:20:27.493 "trtype": "$TEST_TRANSPORT", 00:20:27.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.493 "adrfam": "ipv4", 00:20:27.493 "trsvcid": "$NVMF_PORT", 00:20:27.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.493 "hdgst": ${hdgst:-false}, 00:20:27.493 "ddgst": ${ddgst:-false} 00:20:27.493 }, 00:20:27.493 "method": "bdev_nvme_attach_controller" 00:20:27.493 } 00:20:27.493 EOF 00:20:27.493 )") 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.493 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.493 { 00:20:27.493 "params": { 00:20:27.493 "name": "Nvme$subsystem", 00:20:27.493 "trtype": "$TEST_TRANSPORT", 00:20:27.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.493 "adrfam": "ipv4", 00:20:27.493 "trsvcid": "$NVMF_PORT", 00:20:27.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.493 "hdgst": ${hdgst:-false}, 00:20:27.493 "ddgst": ${ddgst:-false} 00:20:27.493 }, 00:20:27.493 "method": "bdev_nvme_attach_controller" 00:20:27.493 } 00:20:27.493 EOF 00:20:27.493 )") 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.493 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.493 { 00:20:27.493 "params": { 00:20:27.493 "name": "Nvme$subsystem", 00:20:27.493 "trtype": "$TEST_TRANSPORT", 00:20:27.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.493 "adrfam": "ipv4", 00:20:27.493 "trsvcid": "$NVMF_PORT", 00:20:27.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.493 "hdgst": ${hdgst:-false}, 00:20:27.493 "ddgst": ${ddgst:-false} 00:20:27.493 }, 00:20:27.493 "method": "bdev_nvme_attach_controller" 00:20:27.493 } 00:20:27.493 EOF 00:20:27.493 )") 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.493 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.493 { 00:20:27.493 "params": { 00:20:27.493 "name": "Nvme$subsystem", 00:20:27.493 "trtype": "$TEST_TRANSPORT", 00:20:27.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.493 "adrfam": "ipv4", 00:20:27.493 "trsvcid": "$NVMF_PORT", 00:20:27.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.493 "hdgst": ${hdgst:-false}, 00:20:27.493 "ddgst": ${ddgst:-false} 00:20:27.493 }, 00:20:27.493 "method": "bdev_nvme_attach_controller" 00:20:27.493 } 00:20:27.493 EOF 00:20:27.493 )") 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.493 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.493 { 00:20:27.493 "params": { 00:20:27.493 "name": "Nvme$subsystem", 00:20:27.493 "trtype": "$TEST_TRANSPORT", 00:20:27.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.493 "adrfam": "ipv4", 00:20:27.493 "trsvcid": "$NVMF_PORT", 00:20:27.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.493 "hdgst": ${hdgst:-false}, 00:20:27.493 "ddgst": ${ddgst:-false} 00:20:27.493 }, 00:20:27.493 "method": "bdev_nvme_attach_controller" 00:20:27.493 } 00:20:27.493 EOF 00:20:27.493 )") 00:20:27.493 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.493 [2024-05-15 11:06:24.141292] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:20:27.494 [2024-05-15 11:06:24.141345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393691 ] 00:20:27.494 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.494 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.494 { 00:20:27.494 "params": { 00:20:27.494 "name": "Nvme$subsystem", 00:20:27.494 "trtype": "$TEST_TRANSPORT", 00:20:27.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.494 "adrfam": "ipv4", 00:20:27.494 "trsvcid": "$NVMF_PORT", 00:20:27.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.494 "hdgst": ${hdgst:-false}, 00:20:27.494 "ddgst": ${ddgst:-false} 00:20:27.494 }, 00:20:27.494 "method": "bdev_nvme_attach_controller" 00:20:27.494 } 00:20:27.494 EOF 00:20:27.494 )") 00:20:27.754 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.754 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.754 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.754 { 00:20:27.754 "params": { 00:20:27.754 "name": "Nvme$subsystem", 00:20:27.754 "trtype": "$TEST_TRANSPORT", 00:20:27.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.754 "adrfam": "ipv4", 00:20:27.754 "trsvcid": "$NVMF_PORT", 00:20:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.754 "hdgst": ${hdgst:-false}, 00:20:27.754 "ddgst": ${ddgst:-false} 00:20:27.754 }, 00:20:27.754 "method": "bdev_nvme_attach_controller" 00:20:27.754 } 00:20:27.754 EOF 00:20:27.754 )") 00:20:27.754 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.754 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.754 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.754 { 00:20:27.754 "params": { 00:20:27.754 "name": "Nvme$subsystem", 00:20:27.754 "trtype": "$TEST_TRANSPORT", 00:20:27.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.754 "adrfam": "ipv4", 00:20:27.754 "trsvcid": "$NVMF_PORT", 00:20:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.754 "hdgst": ${hdgst:-false}, 00:20:27.754 "ddgst": ${ddgst:-false} 00:20:27.754 }, 00:20:27.754 "method": "bdev_nvme_attach_controller" 00:20:27.754 } 00:20:27.754 EOF 00:20:27.754 )") 00:20:27.754 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.754 11:06:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.754 11:06:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.754 { 00:20:27.754 "params": { 00:20:27.754 "name": "Nvme$subsystem", 00:20:27.754 "trtype": "$TEST_TRANSPORT", 00:20:27.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.754 "adrfam": "ipv4", 00:20:27.754 "trsvcid": "$NVMF_PORT", 00:20:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.754 "hdgst": ${hdgst:-false}, 00:20:27.754 "ddgst": ${ddgst:-false} 00:20:27.754 }, 00:20:27.754 "method": "bdev_nvme_attach_controller" 00:20:27.754 } 00:20:27.754 EOF 00:20:27.754 )") 00:20:27.754 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.754 11:06:24 -- nvmf/common.sh@543 -- # cat 00:20:27.754 11:06:24 -- nvmf/common.sh@545 -- # jq . 
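[annotation] For reference, the way this tc2 run wires the generated config into bdevperf (the --json /dev/fd/63 argument and perfpid/waitforlisten traced above, plus the bdev_get_iostat/jq polling that follows further down) condenses into the hedged sketch below. Paths are assumed relative to an SPDK checkout, rpc.py stands in for the suite's rpc_cmd wrapper, and gen_nvmf_target_json, waitforlisten and num_subsystems are assumed to be sourced from the scripts named in the trace.

bdevperf=./build/examples/bdevperf   # assumed checkout-relative path
rpc_py=./scripts/rpc.py              # stand-in for the suite's rpc_cmd wrapper
sock=/var/tmp/bdevperf.sock

# Start bdevperf with the generated config on an anonymous fd; the
# process substitution is what shows up as --json /dev/fd/63 in the trace.
$bdevperf -r $sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
  -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten $perfpid $sock

# Poll Nvme1n1 until at least 100 reads have completed before proceeding,
# mirroring the shutdown.sh@59-@67 loop seen later in the log.
for i in {1..10}; do
  read_io_count=$($rpc_py -s $sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
  [ "$read_io_count" -ge 100 ] && break
  sleep 0.25
done
[end annotation]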
00:20:27.754 11:06:24 -- nvmf/common.sh@546 -- # IFS=, 00:20:27.754 11:06:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:27.754 "params": { 00:20:27.754 "name": "Nvme1", 00:20:27.754 "trtype": "tcp", 00:20:27.754 "traddr": "10.0.0.2", 00:20:27.754 "adrfam": "ipv4", 00:20:27.754 "trsvcid": "4420", 00:20:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.754 "hdgst": false, 00:20:27.754 "ddgst": false 00:20:27.754 }, 00:20:27.754 "method": "bdev_nvme_attach_controller" 00:20:27.754 },{ 00:20:27.754 "params": { 00:20:27.754 "name": "Nvme2", 00:20:27.754 "trtype": "tcp", 00:20:27.754 "traddr": "10.0.0.2", 00:20:27.754 "adrfam": "ipv4", 00:20:27.754 "trsvcid": "4420", 00:20:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.754 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:27.754 "hdgst": false, 00:20:27.754 "ddgst": false 00:20:27.754 }, 00:20:27.754 "method": "bdev_nvme_attach_controller" 00:20:27.754 },{ 00:20:27.754 "params": { 00:20:27.754 "name": "Nvme3", 00:20:27.754 "trtype": "tcp", 00:20:27.754 "traddr": "10.0.0.2", 00:20:27.754 "adrfam": "ipv4", 00:20:27.754 "trsvcid": "4420", 00:20:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:27.754 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:27.754 "hdgst": false, 00:20:27.754 "ddgst": false 00:20:27.754 }, 00:20:27.754 "method": "bdev_nvme_attach_controller" 00:20:27.754 },{ 00:20:27.754 "params": { 00:20:27.754 "name": "Nvme4", 00:20:27.754 "trtype": "tcp", 00:20:27.754 "traddr": "10.0.0.2", 00:20:27.754 "adrfam": "ipv4", 00:20:27.754 "trsvcid": "4420", 00:20:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:27.754 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:27.754 "hdgst": false, 00:20:27.754 "ddgst": false 00:20:27.754 }, 00:20:27.754 "method": "bdev_nvme_attach_controller" 00:20:27.754 },{ 00:20:27.754 "params": { 00:20:27.754 "name": "Nvme5", 00:20:27.754 "trtype": "tcp", 00:20:27.754 "traddr": "10.0.0.2", 00:20:27.754 "adrfam": "ipv4", 00:20:27.754 "trsvcid": "4420", 00:20:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:27.754 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:27.754 "hdgst": false, 00:20:27.754 "ddgst": false 00:20:27.755 }, 00:20:27.755 "method": "bdev_nvme_attach_controller" 00:20:27.755 },{ 00:20:27.755 "params": { 00:20:27.755 "name": "Nvme6", 00:20:27.755 "trtype": "tcp", 00:20:27.755 "traddr": "10.0.0.2", 00:20:27.755 "adrfam": "ipv4", 00:20:27.755 "trsvcid": "4420", 00:20:27.755 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:27.755 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:27.755 "hdgst": false, 00:20:27.755 "ddgst": false 00:20:27.755 }, 00:20:27.755 "method": "bdev_nvme_attach_controller" 00:20:27.755 },{ 00:20:27.755 "params": { 00:20:27.755 "name": "Nvme7", 00:20:27.755 "trtype": "tcp", 00:20:27.755 "traddr": "10.0.0.2", 00:20:27.755 "adrfam": "ipv4", 00:20:27.755 "trsvcid": "4420", 00:20:27.755 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:27.755 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:27.755 "hdgst": false, 00:20:27.755 "ddgst": false 00:20:27.755 }, 00:20:27.755 "method": "bdev_nvme_attach_controller" 00:20:27.755 },{ 00:20:27.755 "params": { 00:20:27.755 "name": "Nvme8", 00:20:27.755 "trtype": "tcp", 00:20:27.755 "traddr": "10.0.0.2", 00:20:27.755 "adrfam": "ipv4", 00:20:27.755 "trsvcid": "4420", 00:20:27.755 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:27.755 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:27.755 "hdgst": false, 00:20:27.755 "ddgst": false 00:20:27.755 }, 00:20:27.755 "method": 
"bdev_nvme_attach_controller" 00:20:27.755 },{ 00:20:27.755 "params": { 00:20:27.755 "name": "Nvme9", 00:20:27.755 "trtype": "tcp", 00:20:27.755 "traddr": "10.0.0.2", 00:20:27.755 "adrfam": "ipv4", 00:20:27.755 "trsvcid": "4420", 00:20:27.755 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:27.755 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:27.755 "hdgst": false, 00:20:27.755 "ddgst": false 00:20:27.755 }, 00:20:27.755 "method": "bdev_nvme_attach_controller" 00:20:27.755 },{ 00:20:27.755 "params": { 00:20:27.755 "name": "Nvme10", 00:20:27.755 "trtype": "tcp", 00:20:27.755 "traddr": "10.0.0.2", 00:20:27.755 "adrfam": "ipv4", 00:20:27.755 "trsvcid": "4420", 00:20:27.755 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:27.755 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:27.755 "hdgst": false, 00:20:27.755 "ddgst": false 00:20:27.755 }, 00:20:27.755 "method": "bdev_nvme_attach_controller" 00:20:27.755 }' 00:20:27.755 [2024-05-15 11:06:24.200795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.755 [2024-05-15 11:06:24.269076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.137 Running I/O for 10 seconds... 00:20:29.137 11:06:25 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:29.137 11:06:25 -- common/autotest_common.sh@860 -- # return 0 00:20:29.138 11:06:25 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:29.138 11:06:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.138 11:06:25 -- common/autotest_common.sh@10 -- # set +x 00:20:29.397 11:06:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.397 11:06:25 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:29.397 11:06:25 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:29.397 11:06:25 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:29.397 11:06:25 -- target/shutdown.sh@57 -- # local ret=1 00:20:29.397 11:06:25 -- target/shutdown.sh@58 -- # local i 00:20:29.397 11:06:25 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:29.397 11:06:25 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:29.397 11:06:25 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:29.397 11:06:25 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:29.397 11:06:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.397 11:06:25 -- common/autotest_common.sh@10 -- # set +x 00:20:29.397 11:06:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.397 11:06:25 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:29.397 11:06:25 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:29.397 11:06:25 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:29.657 11:06:26 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:29.657 11:06:26 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:29.657 11:06:26 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:29.657 11:06:26 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:29.657 11:06:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.657 11:06:26 -- common/autotest_common.sh@10 -- # set +x 00:20:29.657 11:06:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.657 11:06:26 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:29.657 11:06:26 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:29.657 11:06:26 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:29.917 11:06:26 -- target/shutdown.sh@59 -- # (( i-- )) 
00:20:29.917 11:06:26 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:29.917 11:06:26 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:29.917 11:06:26 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:29.917 11:06:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.917 11:06:26 -- common/autotest_common.sh@10 -- # set +x 00:20:29.917 11:06:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.917 11:06:26 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:29.917 11:06:26 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:29.917 11:06:26 -- target/shutdown.sh@64 -- # ret=0 00:20:29.917 11:06:26 -- target/shutdown.sh@65 -- # break 00:20:29.917 11:06:26 -- target/shutdown.sh@69 -- # return 0 00:20:29.917 11:06:26 -- target/shutdown.sh@110 -- # killprocess 393691 00:20:29.917 11:06:26 -- common/autotest_common.sh@946 -- # '[' -z 393691 ']' 00:20:29.917 11:06:26 -- common/autotest_common.sh@950 -- # kill -0 393691 00:20:29.917 11:06:26 -- common/autotest_common.sh@951 -- # uname 00:20:29.917 11:06:26 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:29.917 11:06:26 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 393691 00:20:30.177 11:06:26 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:30.177 11:06:26 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:30.177 11:06:26 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 393691' 00:20:30.177 killing process with pid 393691 00:20:30.177 11:06:26 -- common/autotest_common.sh@965 -- # kill 393691 00:20:30.177 11:06:26 -- common/autotest_common.sh@970 -- # wait 393691 00:20:30.177 Received shutdown signal, test time was about 0.977241 seconds 00:20:30.177 00:20:30.177 Latency(us) 00:20:30.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.177 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.177 Verification LBA range: start 0x0 length 0x400 00:20:30.177 Nvme1n1 : 0.97 264.71 16.54 0.00 0.00 238818.77 16602.45 249910.61 00:20:30.177 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.177 Verification LBA range: start 0x0 length 0x400 00:20:30.177 Nvme2n1 : 0.97 263.94 16.50 0.00 0.00 234934.40 14636.37 249910.61 00:20:30.177 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.177 Verification LBA range: start 0x0 length 0x400 00:20:30.177 Nvme3n1 : 0.96 266.36 16.65 0.00 0.00 227930.88 32112.64 220200.96 00:20:30.177 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.177 Verification LBA range: start 0x0 length 0x400 00:20:30.178 Nvme4n1 : 0.96 265.37 16.59 0.00 0.00 223920.64 40850.77 241172.48 00:20:30.178 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.178 Verification LBA range: start 0x0 length 0x400 00:20:30.178 Nvme5n1 : 0.93 205.55 12.85 0.00 0.00 282442.24 21626.88 269134.51 00:20:30.178 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.178 Verification LBA range: start 0x0 length 0x400 00:20:30.178 Nvme6n1 : 0.94 203.35 12.71 0.00 0.00 279593.53 27525.12 251658.24 00:20:30.178 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.178 Verification LBA range: start 0x0 length 0x400 00:20:30.178 Nvme7n1 : 0.98 260.15 16.26 0.00 0.00 214211.77 17148.59 251658.24 00:20:30.178 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:20:30.178 Verification LBA range: start 0x0 length 0x400 00:20:30.178 Nvme8n1 : 0.96 267.11 16.69 0.00 0.00 203609.17 11250.35 249910.61 00:20:30.178 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.178 Verification LBA range: start 0x0 length 0x400 00:20:30.178 Nvme9n1 : 0.95 202.15 12.63 0.00 0.00 262388.05 14964.05 253405.87 00:20:30.178 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.178 Verification LBA range: start 0x0 length 0x400 00:20:30.178 Nvme10n1 : 0.95 201.22 12.58 0.00 0.00 257562.17 20206.93 270882.13 00:20:30.178 =================================================================================================================== 00:20:30.178 Total : 2399.91 149.99 0.00 0.00 239456.93 11250.35 270882.13 00:20:30.178 11:06:26 -- target/shutdown.sh@113 -- # sleep 1 00:20:31.559 11:06:27 -- target/shutdown.sh@114 -- # kill -0 393309 00:20:31.559 11:06:27 -- target/shutdown.sh@116 -- # stoptarget 00:20:31.559 11:06:27 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:31.559 11:06:27 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:31.559 11:06:27 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:31.559 11:06:27 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:31.559 11:06:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:31.559 11:06:27 -- nvmf/common.sh@117 -- # sync 00:20:31.559 11:06:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.559 11:06:27 -- nvmf/common.sh@120 -- # set +e 00:20:31.559 11:06:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.559 11:06:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.559 rmmod nvme_tcp 00:20:31.559 rmmod nvme_fabrics 00:20:31.559 rmmod nvme_keyring 00:20:31.559 11:06:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.559 11:06:27 -- nvmf/common.sh@124 -- # set -e 00:20:31.559 11:06:27 -- nvmf/common.sh@125 -- # return 0 00:20:31.559 11:06:27 -- nvmf/common.sh@478 -- # '[' -n 393309 ']' 00:20:31.559 11:06:27 -- nvmf/common.sh@479 -- # killprocess 393309 00:20:31.559 11:06:27 -- common/autotest_common.sh@946 -- # '[' -z 393309 ']' 00:20:31.559 11:06:27 -- common/autotest_common.sh@950 -- # kill -0 393309 00:20:31.559 11:06:27 -- common/autotest_common.sh@951 -- # uname 00:20:31.559 11:06:27 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:31.559 11:06:27 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 393309 00:20:31.559 11:06:27 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:31.559 11:06:27 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:31.559 11:06:27 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 393309' 00:20:31.559 killing process with pid 393309 00:20:31.559 11:06:27 -- common/autotest_common.sh@965 -- # kill 393309 00:20:31.559 [2024-05-15 11:06:27.962322] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:31.559 11:06:27 -- common/autotest_common.sh@970 -- # wait 393309 00:20:31.559 11:06:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:31.559 11:06:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:31.559 11:06:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:31.559 11:06:28 -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.559 11:06:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:31.559 11:06:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.559 11:06:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.559 11:06:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.106 11:06:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.106 00:20:34.106 real 0m7.892s 00:20:34.106 user 0m23.884s 00:20:34.106 sys 0m1.200s 00:20:34.106 11:06:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:34.106 11:06:30 -- common/autotest_common.sh@10 -- # set +x 00:20:34.106 ************************************ 00:20:34.106 END TEST nvmf_shutdown_tc2 00:20:34.106 ************************************ 00:20:34.106 11:06:30 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:34.106 11:06:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:34.106 11:06:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:34.106 11:06:30 -- common/autotest_common.sh@10 -- # set +x 00:20:34.106 ************************************ 00:20:34.106 START TEST nvmf_shutdown_tc3 00:20:34.106 ************************************ 00:20:34.106 11:06:30 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:20:34.106 11:06:30 -- target/shutdown.sh@121 -- # starttarget 00:20:34.106 11:06:30 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:34.106 11:06:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:34.106 11:06:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.106 11:06:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:34.106 11:06:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:34.106 11:06:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:34.106 11:06:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.106 11:06:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.106 11:06:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.106 11:06:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:34.106 11:06:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:34.106 11:06:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.106 11:06:30 -- common/autotest_common.sh@10 -- # set +x 00:20:34.106 11:06:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.106 11:06:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.106 11:06:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.106 11:06:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.106 11:06:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.106 11:06:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.106 11:06:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.106 11:06:30 -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.106 11:06:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.107 11:06:30 -- nvmf/common.sh@296 -- # e810=() 00:20:34.107 11:06:30 -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.107 11:06:30 -- nvmf/common.sh@297 -- # x722=() 00:20:34.107 11:06:30 -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.107 11:06:30 -- nvmf/common.sh@298 -- # mlx=() 00:20:34.107 11:06:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.107 11:06:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:34.107 11:06:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.107 11:06:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.107 11:06:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.107 11:06:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.107 11:06:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.107 11:06:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:34.107 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:34.107 11:06:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.107 11:06:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:34.107 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:34.107 11:06:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.107 11:06:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.107 11:06:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.107 11:06:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.107 11:06:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.107 11:06:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:34.107 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:34.107 11:06:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.107 11:06:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.107 11:06:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.107 11:06:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.107 11:06:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.107 11:06:30 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:34.107 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:34.107 11:06:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.107 11:06:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:34.107 11:06:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:34.107 11:06:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:34.107 11:06:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.107 11:06:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.107 11:06:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.107 11:06:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.107 11:06:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.107 11:06:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.107 11:06:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.107 11:06:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.107 11:06:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.107 11:06:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.107 11:06:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.107 11:06:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.107 11:06:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.107 11:06:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.107 11:06:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.107 11:06:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.107 11:06:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.107 11:06:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.107 11:06:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.107 11:06:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:20:34.107 00:20:34.107 --- 10.0.0.2 ping statistics --- 00:20:34.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.107 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:20:34.107 11:06:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:20:34.107 00:20:34.107 --- 10.0.0.1 ping statistics --- 00:20:34.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.107 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:34.107 11:06:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.107 11:06:30 -- nvmf/common.sh@411 -- # return 0 00:20:34.107 11:06:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.107 11:06:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.107 11:06:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.107 11:06:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.107 11:06:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.107 11:06:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.107 11:06:30 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:34.107 11:06:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:34.107 11:06:30 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:34.107 11:06:30 -- common/autotest_common.sh@10 -- # set +x 00:20:34.107 11:06:30 -- nvmf/common.sh@470 -- # nvmfpid=395136 00:20:34.107 11:06:30 -- nvmf/common.sh@471 -- # waitforlisten 395136 00:20:34.107 11:06:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:34.107 11:06:30 -- common/autotest_common.sh@827 -- # '[' -z 395136 ']' 00:20:34.107 11:06:30 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.107 11:06:30 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:34.107 11:06:30 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.107 11:06:30 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:34.107 11:06:30 -- common/autotest_common.sh@10 -- # set +x 00:20:34.368 [2024-05-15 11:06:30.783469] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:20:34.368 [2024-05-15 11:06:30.783536] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.368 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.368 [2024-05-15 11:06:30.872034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.368 [2024-05-15 11:06:30.939333] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.368 [2024-05-15 11:06:30.939366] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.368 [2024-05-15 11:06:30.939372] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.368 [2024-05-15 11:06:30.939376] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.368 [2024-05-15 11:06:30.939381] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
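For reference, the cvl_0_0_ns_spdk plumbing traced above reduces to the short sequence below; this is a condensed sketch of the commands already visible in this trace, where cvl_0_0/cvl_0_1 are simply the two ice ports detected on this runner and 4420 is the NVMe/TCP listener port used later.

# Target port lives in its own network namespace; initiator port stays in the host namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> host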
00:20:34.368 [2024-05-15 11:06:30.939506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.368 [2024-05-15 11:06:30.939666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.368 [2024-05-15 11:06:30.939896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.368 [2024-05-15 11:06:30.939897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:34.940 11:06:31 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:34.940 11:06:31 -- common/autotest_common.sh@860 -- # return 0 00:20:34.940 11:06:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:34.940 11:06:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.940 11:06:31 -- common/autotest_common.sh@10 -- # set +x 00:20:35.201 11:06:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.201 11:06:31 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.201 11:06:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.201 11:06:31 -- common/autotest_common.sh@10 -- # set +x 00:20:35.201 [2024-05-15 11:06:31.607689] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.201 11:06:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.201 11:06:31 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:35.201 11:06:31 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:35.201 11:06:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:35.202 11:06:31 -- common/autotest_common.sh@10 -- # set +x 00:20:35.202 11:06:31 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.202 11:06:31 -- target/shutdown.sh@28 -- # cat 00:20:35.202 11:06:31 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:35.202 11:06:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.202 11:06:31 -- common/autotest_common.sh@10 -- # set +x 00:20:35.202 Malloc1 00:20:35.202 [2024-05-15 11:06:31.706202] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:35.202 [2024-05-15 11:06:31.706390] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.202 Malloc2 00:20:35.202 Malloc3 00:20:35.202 Malloc4 00:20:35.202 Malloc5 00:20:35.462 Malloc6 00:20:35.462 Malloc7 00:20:35.462 Malloc8 00:20:35.462 Malloc9 00:20:35.462 Malloc10 00:20:35.462 11:06:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.462 11:06:32 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:35.462 11:06:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.463 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:20:35.463 11:06:32 -- target/shutdown.sh@125 -- # perfpid=395344 00:20:35.463 11:06:32 -- target/shutdown.sh@126 -- # waitforlisten 395344 /var/tmp/bdevperf.sock 00:20:35.463 11:06:32 -- common/autotest_common.sh@827 -- # '[' -z 395344 ']' 00:20:35.463 11:06:32 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.463 11:06:32 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:35.463 11:06:32 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.463 11:06:32 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:35.463 11:06:32 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:35.463 11:06:32 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:35.463 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:20:35.463 11:06:32 -- nvmf/common.sh@521 -- # config=() 00:20:35.463 11:06:32 -- nvmf/common.sh@521 -- # local subsystem config 00:20:35.463 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.463 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.463 { 00:20:35.463 "params": { 00:20:35.463 "name": "Nvme$subsystem", 00:20:35.463 "trtype": "$TEST_TRANSPORT", 00:20:35.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.463 "adrfam": "ipv4", 00:20:35.463 "trsvcid": "$NVMF_PORT", 00:20:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.463 "hdgst": ${hdgst:-false}, 00:20:35.463 "ddgst": ${ddgst:-false} 00:20:35.463 }, 00:20:35.463 "method": "bdev_nvme_attach_controller" 00:20:35.463 } 00:20:35.463 EOF 00:20:35.463 )") 00:20:35.463 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.463 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.463 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.463 { 00:20:35.463 "params": { 00:20:35.463 "name": "Nvme$subsystem", 00:20:35.463 "trtype": "$TEST_TRANSPORT", 00:20:35.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.463 "adrfam": "ipv4", 00:20:35.463 "trsvcid": "$NVMF_PORT", 00:20:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.463 "hdgst": ${hdgst:-false}, 00:20:35.463 "ddgst": ${ddgst:-false} 00:20:35.463 }, 00:20:35.463 "method": "bdev_nvme_attach_controller" 00:20:35.463 } 00:20:35.463 EOF 00:20:35.463 )") 00:20:35.724 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.724 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:20:35.724 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.724 { 00:20:35.724 "params": { 00:20:35.724 "name": "Nvme$subsystem", 00:20:35.724 "trtype": "$TEST_TRANSPORT", 00:20:35.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.724 "adrfam": "ipv4", 00:20:35.724 "trsvcid": "$NVMF_PORT", 00:20:35.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.724 "hdgst": ${hdgst:-false}, 00:20:35.724 "ddgst": ${ddgst:-false} 00:20:35.724 }, 00:20:35.724 "method": "bdev_nvme_attach_controller" 00:20:35.724 } 00:20:35.724 EOF 00:20:35.724 )") 00:20:35.724 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.724 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.724 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.724 { 00:20:35.724 "params": { 00:20:35.724 "name": "Nvme$subsystem", 00:20:35.724 "trtype": "$TEST_TRANSPORT", 00:20:35.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.724 "adrfam": "ipv4", 00:20:35.724 "trsvcid": "$NVMF_PORT", 00:20:35.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.724 "hdgst": ${hdgst:-false}, 00:20:35.724 "ddgst": ${ddgst:-false} 00:20:35.724 }, 00:20:35.724 "method": "bdev_nvme_attach_controller" 00:20:35.724 } 00:20:35.724 EOF 00:20:35.724 )") 00:20:35.724 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.724 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.724 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.724 { 00:20:35.724 "params": { 00:20:35.724 "name": "Nvme$subsystem", 00:20:35.724 "trtype": "$TEST_TRANSPORT", 00:20:35.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.724 "adrfam": "ipv4", 00:20:35.724 "trsvcid": "$NVMF_PORT", 00:20:35.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.724 "hdgst": ${hdgst:-false}, 00:20:35.724 "ddgst": ${ddgst:-false} 00:20:35.724 }, 00:20:35.724 "method": "bdev_nvme_attach_controller" 00:20:35.724 } 00:20:35.724 EOF 00:20:35.724 )") 00:20:35.724 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.724 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.724 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.724 { 00:20:35.724 "params": { 00:20:35.724 "name": "Nvme$subsystem", 00:20:35.724 "trtype": "$TEST_TRANSPORT", 00:20:35.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.724 "adrfam": "ipv4", 00:20:35.724 "trsvcid": "$NVMF_PORT", 00:20:35.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.725 "hdgst": ${hdgst:-false}, 00:20:35.725 "ddgst": ${ddgst:-false} 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 } 00:20:35.725 EOF 00:20:35.725 )") 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.725 [2024-05-15 11:06:32.149033] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:20:35.725 [2024-05-15 11:06:32.149085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395344 ] 00:20:35.725 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.725 { 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme$subsystem", 00:20:35.725 "trtype": "$TEST_TRANSPORT", 00:20:35.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "$NVMF_PORT", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.725 "hdgst": ${hdgst:-false}, 00:20:35.725 "ddgst": ${ddgst:-false} 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 } 00:20:35.725 EOF 00:20:35.725 )") 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.725 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.725 { 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme$subsystem", 00:20:35.725 "trtype": "$TEST_TRANSPORT", 00:20:35.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "$NVMF_PORT", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.725 "hdgst": ${hdgst:-false}, 00:20:35.725 "ddgst": ${ddgst:-false} 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 } 00:20:35.725 EOF 00:20:35.725 )") 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.725 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.725 { 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme$subsystem", 00:20:35.725 "trtype": "$TEST_TRANSPORT", 00:20:35.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "$NVMF_PORT", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.725 "hdgst": ${hdgst:-false}, 00:20:35.725 "ddgst": ${ddgst:-false} 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 } 00:20:35.725 EOF 00:20:35.725 )") 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.725 11:06:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.725 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.725 { 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme$subsystem", 00:20:35.725 "trtype": "$TEST_TRANSPORT", 00:20:35.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "$NVMF_PORT", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.725 "hdgst": ${hdgst:-false}, 00:20:35.725 "ddgst": ${ddgst:-false} 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 } 00:20:35.725 EOF 00:20:35.725 )") 00:20:35.725 11:06:32 -- nvmf/common.sh@543 -- # cat 00:20:35.725 11:06:32 -- nvmf/common.sh@545 -- # jq . 
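The config+=( … ) heredoc steps traced here accumulate one bdev_nvme_attach_controller JSON fragment per subsystem; gen_nvmf_target_json then joins the fragments with commas and runs the result through jq, which is what the IFS=, / printf / jq . entries around this point are doing. A reduced sketch of that idiom follows (hypothetical two-subsystem version; the real helper also fills in traddr, trsvcid and the NQNs, and its exact wrapper object may differ):

# Collect one JSON fragment per subsystem in a bash array.
config=()
for subsystem in 1 2; do
  config+=("{ \"method\": \"bdev_nvme_attach_controller\", \"params\": { \"name\": \"Nvme$subsystem\" } }")
done
# "${config[*]}" joins the array on the first character of IFS, so this splices a
# comma-separated list into the JSON document, which jq validates and pretty-prints.
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON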
00:20:35.725 11:06:32 -- nvmf/common.sh@546 -- # IFS=, 00:20:35.725 11:06:32 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme1", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.725 "hdgst": false, 00:20:35.725 "ddgst": false 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 },{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme2", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.725 "hdgst": false, 00:20:35.725 "ddgst": false 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 },{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme3", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:35.725 "hdgst": false, 00:20:35.725 "ddgst": false 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 },{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme4", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:35.725 "hdgst": false, 00:20:35.725 "ddgst": false 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 },{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme5", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:35.725 "hdgst": false, 00:20:35.725 "ddgst": false 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 },{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme6", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:35.725 "hdgst": false, 00:20:35.725 "ddgst": false 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 },{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme7", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:35.725 "hdgst": false, 00:20:35.725 "ddgst": false 00:20:35.725 }, 00:20:35.725 "method": "bdev_nvme_attach_controller" 00:20:35.725 },{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme8", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:35.725 "hdgst": false, 00:20:35.725 "ddgst": false 00:20:35.725 }, 00:20:35.725 "method": 
"bdev_nvme_attach_controller" 00:20:35.725 },{ 00:20:35.725 "params": { 00:20:35.725 "name": "Nvme9", 00:20:35.725 "trtype": "tcp", 00:20:35.725 "traddr": "10.0.0.2", 00:20:35.725 "adrfam": "ipv4", 00:20:35.725 "trsvcid": "4420", 00:20:35.725 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:35.725 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:35.725 "hdgst": false, 00:20:35.726 "ddgst": false 00:20:35.726 }, 00:20:35.726 "method": "bdev_nvme_attach_controller" 00:20:35.726 },{ 00:20:35.726 "params": { 00:20:35.726 "name": "Nvme10", 00:20:35.726 "trtype": "tcp", 00:20:35.726 "traddr": "10.0.0.2", 00:20:35.726 "adrfam": "ipv4", 00:20:35.726 "trsvcid": "4420", 00:20:35.726 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:35.726 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:35.726 "hdgst": false, 00:20:35.726 "ddgst": false 00:20:35.726 }, 00:20:35.726 "method": "bdev_nvme_attach_controller" 00:20:35.726 }' 00:20:35.726 [2024-05-15 11:06:32.208188] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.726 [2024-05-15 11:06:32.272647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.638 Running I/O for 10 seconds... 00:20:37.638 11:06:33 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:37.638 11:06:33 -- common/autotest_common.sh@860 -- # return 0 00:20:37.638 11:06:33 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:37.638 11:06:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.638 11:06:33 -- common/autotest_common.sh@10 -- # set +x 00:20:37.638 11:06:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.638 11:06:33 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.638 11:06:33 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:37.638 11:06:33 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:37.638 11:06:33 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:37.638 11:06:33 -- target/shutdown.sh@57 -- # local ret=1 00:20:37.638 11:06:33 -- target/shutdown.sh@58 -- # local i 00:20:37.638 11:06:33 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:37.638 11:06:33 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:37.638 11:06:33 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:37.638 11:06:33 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:37.638 11:06:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.638 11:06:33 -- common/autotest_common.sh@10 -- # set +x 00:20:37.638 11:06:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.638 11:06:34 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:37.638 11:06:34 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:37.638 11:06:34 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:37.638 11:06:34 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:37.638 11:06:34 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:37.638 11:06:34 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:37.638 11:06:34 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:37.639 11:06:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.639 11:06:34 -- common/autotest_common.sh@10 -- # set +x 00:20:37.898 11:06:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.898 11:06:34 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:37.898 11:06:34 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:37.898 11:06:34 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:38.176 11:06:34 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:38.176 11:06:34 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.176 11:06:34 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.176 11:06:34 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.176 11:06:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.176 11:06:34 -- common/autotest_common.sh@10 -- # set +x 00:20:38.176 11:06:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.176 11:06:34 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:38.176 11:06:34 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:38.176 11:06:34 -- target/shutdown.sh@64 -- # ret=0 00:20:38.176 11:06:34 -- target/shutdown.sh@65 -- # break 00:20:38.176 11:06:34 -- target/shutdown.sh@69 -- # return 0 00:20:38.176 11:06:34 -- target/shutdown.sh@135 -- # killprocess 395136 00:20:38.176 11:06:34 -- common/autotest_common.sh@946 -- # '[' -z 395136 ']' 00:20:38.176 11:06:34 -- common/autotest_common.sh@950 -- # kill -0 395136 00:20:38.176 11:06:34 -- common/autotest_common.sh@951 -- # uname 00:20:38.176 11:06:34 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:38.176 11:06:34 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 395136 00:20:38.176 11:06:34 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:38.176 11:06:34 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:38.176 11:06:34 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 395136' 00:20:38.176 killing process with pid 395136 00:20:38.176 11:06:34 -- common/autotest_common.sh@965 -- # kill 395136 00:20:38.176 [2024-05-15 11:06:34.698820] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:38.176 11:06:34 -- common/autotest_common.sh@970 -- # wait 395136 00:20:38.176 [2024-05-15 11:06:34.699240] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.699266] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.699272] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.699277] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.699282] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.699287] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.699292] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.699297] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.699302] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c17020 is same with the state(5) to be set 00:20:38.176 [2024-05-15 11:06:34.700405] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c192e0 is same with the state(5) to be set 00:20:38.177 [2024-05-15 11:06:34.701649] tcp.c:1595:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1c174c0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.701921] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c174c0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.701925] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c174c0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.701929] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c174c0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.701934] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c174c0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.701938] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c174c0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.701943] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c174c0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.701948] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c174c0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.702855] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17960 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703536] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703553] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703559] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703583] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703588] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703593] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703598] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703602] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703608] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703615] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703620] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703625] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703631] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 
11:06:34.703636] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703640] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703645] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703651] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703656] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703660] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703665] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703670] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703676] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703681] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703686] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703691] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703697] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703702] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703706] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703711] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703715] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703720] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703725] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703730] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703734] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703739] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same 
with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703744] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703750] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703755] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703760] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703765] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703770] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.178 [2024-05-15 11:06:34.703775] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703780] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703785] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703789] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703794] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703799] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703803] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703807] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703812] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703818] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703823] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703828] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703832] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703837] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703841] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703846] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703850] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703855] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703859] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703864] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703869] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.703873] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c182a0 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705028] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705044] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705049] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705054] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705058] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705063] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705068] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705072] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705077] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705082] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705088] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705093] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705097] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705102] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705107] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the 
state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705111] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705116] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705120] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705125] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705129] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705134] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705138] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705143] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705148] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705152] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705157] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705161] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705169] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705173] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705178] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705183] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705188] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705192] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705197] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705202] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705206] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705210] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705215] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705219] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705224] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705228] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705232] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705237] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705242] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705247] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705251] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705256] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705260] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705264] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705268] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705273] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705278] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705282] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705287] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705293] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705298] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705303] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705307] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 
11:06:34.705311] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705316] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705320] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.179 [2024-05-15 11:06:34.705325] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705329] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18610 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705872] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705888] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705893] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705897] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705902] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705907] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705911] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705916] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705921] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705925] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705930] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705935] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705939] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705944] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705948] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705953] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705958] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same 
with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705962] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705970] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705975] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705980] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705985] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705989] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705994] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.705998] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706003] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706007] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706012] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706016] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706021] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706025] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706030] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706035] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706040] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706044] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706049] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706053] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706057] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706062] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706066] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706070] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706075] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706080] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706085] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706089] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706096] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706101] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706105] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706110] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706115] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706119] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706125] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706130] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706134] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706139] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706143] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706148] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706152] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706157] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.180 [2024-05-15 11:06:34.706161] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the 
state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706166] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706170] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706175] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706180] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18ad0 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706615] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706629] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706635] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706640] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706647] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706652] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706656] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706662] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706669] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706674] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706679] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706683] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706688] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706693] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706697] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706702] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706706] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706712] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706716] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706721] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706725] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706730] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706734] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706738] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706743] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706748] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706752] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706756] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706761] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706766] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706771] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706775] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706780] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706784] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706789] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706793] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706798] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706803] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 11:06:34.706808] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.181 [2024-05-15 
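For context on the burst above: the target-side TCP transport logs this line when a qpair is asked to enter the PDU recv state it is already in, which keeps happening while a disconnecting qpair is still being polled. Below is a minimal sketch of that guard, paraphrased from SPDK's lib/nvmf/tcp.c; the surrounding per-state bookkeeping is elided, the exact line number varies by revision, and "state(5)" is simply the integer value of the requested enum nvme_tcp_pdu_recv_state entry.

    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
            /* Guard that produces the repeated *ERROR* lines above: asking for
             * the recv state the qpair already holds is treated as a no-op. */
            if (tqpair->recv_state == state) {
                    SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                                tqpair, state);
                    return;
            }

            tqpair->recv_state = state;
            /* ... per-state bookkeeping elided in this sketch ... */
    }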
00:20:38.181 [2024-05-15 11:06:34.710663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:38.181 [2024-05-15 11:06:34.710698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pair repeated for admin cid:1, cid:2 and cid:3 of each host connection, each group followed by nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set, for tqpair=0x177fc10, 0x18f22a0, 0x19236c0, 0x175d960, 0x1781760, 0x1788a40, 0x18f3200, 0x177fdf0 and 0x12afe60, 11:06:34.710663 through 11:06:34.711433 ...]
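For context on the NOTICE pairs above and below: every command still outstanding when the submission queues are torn down (the admin ASYNC EVENT REQUESTs here, the queued WRITEs that follow) is completed with generic status "ABORTED - SQ DELETION", i.e. SCT 00 / SC 08, and nvme_qpair.c prints the command and its completion as it drains them. A hedged sketch of how a host-side completion callback could recognize that status with the public SPDK API follows; the callback name and control flow are illustrative only, not part of this test.

    #include "spdk/nvme.h"

    /* Illustrative-only callback: classifies the (00/08) completions above. */
    static void
    aer_completion_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* (00/08): the command was aborted because its submission
                     * queue was deleted, e.g. while the controller is being
                     * disconnected; expected noise during this shutdown path. */
                    return;
            }
            /* Any other status would be a genuine completion or error. */
    }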
00:20:38.182 [2024-05-15 11:06:34.712328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:38.182 [2024-05-15 11:06:34.712345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION (00/08) pair repeated for cid:1 through cid:35 (lba 24704 through 29056 in 128-block steps), 11:06:34.712361 through 11:06:34.712945 ...]
00:20:38.183 [2024-05-15 11:06:34.712954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15
11:06:34.712961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.712970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.712977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.712986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.712993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 
11:06:34.713127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.183 [2024-05-15 11:06:34.713260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.183 [2024-05-15 11:06:34.713269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 
11:06:34.713294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188bcb0 is same with the state(5) to be set 00:20:38.184 [2024-05-15 11:06:34.713455] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x188bcb0 was disconnected and freed. reset controller. 
00:20:38.184 [2024-05-15 11:06:34.713585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 
[2024-05-15 11:06:34.713758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 
11:06:34.713919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.713985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.713992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.714000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.714007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.714016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.714024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.714034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.714041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.714050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.714057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 11:06:34.714066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.714073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.184 [2024-05-15 
11:06:34.714083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.184 [2024-05-15 11:06:34.714089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.714098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.714105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.714114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.714121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.714130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.714137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.717014] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717033] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717040] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717046] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717051] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717056] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717060] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717065] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717069] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717074] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717079] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717084] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717092] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 
11:06:34.717096] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.717101] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18f70 is same with the state(5) to be set 00:20:38.185 [2024-05-15 11:06:34.726742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.726986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.726996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.185 [2024-05-15 11:06:34.727155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.185 [2024-05-15 11:06:34.727165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.727173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.727184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.727191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.727200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.727208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.727217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.727224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.727234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.727241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.727250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.727257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.727266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.727274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.727342] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1755f90 was disconnected and freed. reset controller. 00:20:38.186 [2024-05-15 11:06:34.742104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.186 [2024-05-15 11:06:34.742138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.186 [2024-05-15 11:06:34.742156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.186 [2024-05-15 11:06:34.742172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.186 [2024-05-15 11:06:34.742188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a9a20 is same with the state(5) to be set 00:20:38.186 [2024-05-15 11:06:34.742219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177fc10 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f22a0 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19236c0 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175d960 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1781760 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788a40 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f3200 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742325] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177fdf0 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742344] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe60 (9): Bad file descriptor 00:20:38.186 [2024-05-15 11:06:34.742471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.186 [2024-05-15 11:06:34.742886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.186 [2024-05-15 11:06:34.742896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.742903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.742912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.742920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.742929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.742936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.742946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.742953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.742962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.742970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.742979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.742986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.742996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.187 [2024-05-15 11:06:34.743543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.187 [2024-05-15 11:06:34.743559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.743618] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x188a740 was disconnected and freed. reset controller. 00:20:38.188 [2024-05-15 11:06:34.746119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.188 [2024-05-15 11:06:34.746664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.188 [2024-05-15 11:06:34.746673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.746987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.746996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.189 [2024-05-15 11:06:34.747200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.189 [2024-05-15 11:06:34.747207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.747216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.747223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.747276] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1759ca0 was disconnected and freed. reset controller. 
00:20:38.190 [2024-05-15 11:06:34.747341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:38.190 [2024-05-15 11:06:34.748700] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.190 [2024-05-15 11:06:34.749949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:38.190 [2024-05-15 11:06:34.749975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:38.190 [2024-05-15 11:06:34.750311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.190 [2024-05-15 11:06:34.750757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.190 [2024-05-15 11:06:34.750795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177fc10 with addr=10.0.0.2, port=4420 00:20:38.190 [2024-05-15 11:06:34.750808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177fc10 is same with the state(5) to be set 00:20:38.190 [2024-05-15 11:06:34.751226] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.190 [2024-05-15 11:06:34.751553] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.190 [2024-05-15 11:06:34.751609] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.190 [2024-05-15 11:06:34.751625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:38.190 [2024-05-15 11:06:34.751643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a9a20 (9): Bad file descriptor 00:20:38.190 [2024-05-15 11:06:34.751848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.190 [2024-05-15 11:06:34.752167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.190 [2024-05-15 11:06:34.752178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afe60 with addr=10.0.0.2, port=4420 00:20:38.190 [2024-05-15 11:06:34.752191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe60 is same with the state(5) to be set 00:20:38.190 [2024-05-15 11:06:34.752471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.190 [2024-05-15 11:06:34.752664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.190 [2024-05-15 11:06:34.752675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788a40 with addr=10.0.0.2, port=4420 00:20:38.190 [2024-05-15 11:06:34.752683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788a40 is same with the state(5) to be set 00:20:38.190 [2024-05-15 11:06:34.752692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177fc10 (9): Bad file descriptor 00:20:38.190 [2024-05-15 11:06:34.752744] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.190 [2024-05-15 11:06:34.752782] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.190 [2024-05-15 11:06:34.753369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe60 (9): Bad file descriptor 00:20:38.190 [2024-05-15 11:06:34.753384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x1788a40 (9): Bad file descriptor 00:20:38.190 [2024-05-15 11:06:34.753393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:38.190 [2024-05-15 11:06:34.753400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:38.190 [2024-05-15 11:06:34.753408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:38.190 [2024-05-15 11:06:34.753526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.190 [2024-05-15 11:06:34.753877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.190 [2024-05-15 11:06:34.754070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.190 [2024-05-15 11:06:34.754080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9a20 with addr=10.0.0.2, port=4420 00:20:38.190 [2024-05-15 11:06:34.754088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a9a20 is same with the state(5) to be set 00:20:38.190 [2024-05-15 11:06:34.754097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:38.190 [2024-05-15 11:06:34.754104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:38.190 [2024-05-15 11:06:34.754110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:38.190 [2024-05-15 11:06:34.754123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:38.190 [2024-05-15 11:06:34.754129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:38.190 [2024-05-15 11:06:34.754136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:20:38.190 [2024-05-15 11:06:34.754173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 
11:06:34.754357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.190 [2024-05-15 11:06:34.754522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.190 [2024-05-15 11:06:34.754531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.191 [2024-05-15 11:06:34.754871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.191 [2024-05-15 11:06:34.754881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:38.191 [2024-05-15 11:06:34.754890-11:06:34.755273] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:40-63 nsid:1 lba:21504-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.192 [2024-05-15 11:06:34.755283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b840 is same with the state(5) to be set
00:20:38.192 [2024-05-15 11:06:34.756571-11:06:34.757677] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:6-63 nsid:1 lba:17152-24448 len:128 and WRITE sqid:1 cid:0-5 nsid:1 lba:24576-25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.194 [2024-05-15 11:06:34.757684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18892b0 is same with the state(5) to be set
00:20:38.194 [2024-05-15 11:06:34.758952-11:06:34.760050] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.195 [2024-05-15 11:06:34.760058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d1d0 is same with the state(5) to be set
00:20:38.195 [2024-05-15 11:06:34.761317-11:06:34.762309] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-56 nsid:1 lba:24576-31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:38.197 [2024-05-15 11:06:34.762319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.762326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.762335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.762342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.762352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.762359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.762370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.762378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.762387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.762394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.762404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.762412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.762421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.762428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.762436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17574b0 is same with the state(5) to be set 00:20:38.197 [2024-05-15 11:06:34.763695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.197 [2024-05-15 11:06:34.763869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.197 [2024-05-15 11:06:34.763876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.763886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.763894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.763903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.763910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.763919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.763926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.763935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.763943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.763953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.763960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.763971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.763978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.763987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.763994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.764300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.764309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.769901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.769948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.769957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.769968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.769975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.769986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.769994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.198 [2024-05-15 11:06:34.770178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.198 [2024-05-15 11:06:34.770186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.770425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.770434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17589d0 is same with the state(5) to be set 00:20:38.199 [2024-05-15 11:06:34.775925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.775958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.775977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.775985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.775995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.199 [2024-05-15 11:06:34.776222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.199 [2024-05-15 11:06:34.776231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:38.200 [2024-05-15 11:06:34.776784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.200 [2024-05-15 11:06:34.776900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.200 [2024-05-15 11:06:34.776910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.201 [2024-05-15 11:06:34.776919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.201 [2024-05-15 11:06:34.776929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.201 [2024-05-15 11:06:34.776936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.201 [2024-05-15 11:06:34.776947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.201 [2024-05-15 
11:06:34.776955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.201 [2024-05-15 11:06:34.776964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.201 [2024-05-15 11:06:34.776971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.201 [2024-05-15 11:06:34.776980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.201 [2024-05-15 11:06:34.776987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.201 [2024-05-15 11:06:34.776997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.201 [2024-05-15 11:06:34.777004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.201 [2024-05-15 11:06:34.777013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.201 [2024-05-15 11:06:34.777020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.201 [2024-05-15 11:06:34.777030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.201 [2024-05-15 11:06:34.777037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.201 [2024-05-15 11:06:34.777046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18041f0 is same with the state(5) to be set 00:20:38.201 [2024-05-15 11:06:34.778551] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.201 [2024-05-15 11:06:34.778570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.201 [2024-05-15 11:06:34.778580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.201 [2024-05-15 11:06:34.778593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:38.201 [2024-05-15 11:06:34.778603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:38.201 [2024-05-15 11:06:34.778645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a9a20 (9): Bad file descriptor 00:20:38.201 [2024-05-15 11:06:34.778692] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.201 [2024-05-15 11:06:34.778707] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.201 [2024-05-15 11:06:34.778722] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.201 [2024-05-15 11:06:34.778734] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:38.201 [2024-05-15 11:06:34.778810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:38.201 [2024-05-15 11:06:34.778825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:38.201 task offset: 24576 on job bdev=Nvme4n1 fails
00:20:38.201 
00:20:38.201                                                       Latency(us)
00:20:38.201 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:38.201 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme1n1 ended in about 0.98 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme1n1             :       0.98     131.11       8.19      65.56       0.00  321922.56   20643.84  279620.27
00:20:38.201 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme2n1 ended in about 0.98 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme2n1             :       0.98     136.93       8.56      65.40       0.00  306773.02   19005.44  262144.00
00:20:38.201 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme3n1 ended in about 0.97 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme3n1             :       0.97     198.27      12.39      66.09       0.00  229864.53   35170.99  237677.23
00:20:38.201 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme4n1 ended in about 0.96 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme4n1             :       0.96     199.05      12.44      66.35       0.00  224170.24   33860.27  249910.61
00:20:38.201 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme5n1 ended in about 0.98 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme5n1             :       0.98     130.48       8.15      65.24       0.00  298073.88   16820.91  251658.24
00:20:38.201 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme6n1 ended in about 0.97 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme6n1             :       0.97     198.80      12.43      66.27       0.00  214950.40   18896.21  219327.15
00:20:38.201 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme7n1 ended in about 0.98 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme7n1             :       0.98     195.25      12.20      65.08       0.00  214606.93   18022.40  242920.11
00:20:38.201 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme8n1 ended in about 0.99 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme8n1             :       0.99     197.71      12.36      64.56       0.00  208596.26   10868.05  246415.36
00:20:38.201 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme9n1 ended in about 0.97 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme9n1             :       0.97     195.94      12.25      66.00       0.00  203272.22    6990.51  277872.64
00:20:38.201 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.201 Job: Nvme10n1 ended in about 1.00 seconds with error
00:20:38.201 Verification LBA range: start 0x0 length 0x400
00:20:38.201 Nvme10n1            :       1.00     192.39      12.02      64.13       0.00  204013.44   19770.03  246415.36
00:20:38.201 ===================================================================================================================
00:20:38.201 Total               :                1775.93     111.00     654.67       0.00  237404.70    6990.51  279620.27
00:20:38.201 [2024-05-15 11:06:34.802996] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:38.201 [2024-05-15 11:06:34.803039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:38.201 [2024-05-15 11:06:34.803440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.201 [2024-05-15 11:06:34.803658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.201 [2024-05-15 11:06:34.803670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175d960 with addr=10.0.0.2, port=4420
00:20:38.201 [2024-05-15 11:06:34.803687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175d960 is same with the state(5) to be set
00:20:38.201 [2024-05-15 11:06:34.803981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.201 [2024-05-15 11:06:34.804307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.201 [2024-05-15 11:06:34.804317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1781760 with addr=10.0.0.2, port=4420
00:20:38.201 [2024-05-15 11:06:34.804324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1781760 is same with the state(5) to be set
00:20:38.201 [2024-05-15 11:06:34.804505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.201 [2024-05-15 11:06:34.804843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.201 [2024-05-15 11:06:34.804853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177fdf0 with addr=10.0.0.2, port=4420
00:20:38.201 [2024-05-15 11:06:34.804861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177fdf0 is same with the state(5) to be set
00:20:38.201 [2024-05-15 11:06:34.804868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:38.201 [2024-05-15 11:06:34.804875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:38.201 [2024-05-15 11:06:34.804883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:38.201 [2024-05-15 11:06:34.806494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:38.201 [2024-05-15 11:06:34.806509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:38.201 [2024-05-15 11:06:34.806518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.201 [2024-05-15 11:06:34.806876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.201 [2024-05-15 11:06:34.807058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.201 [2024-05-15 11:06:34.807069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19236c0 with addr=10.0.0.2, port=4420 00:20:38.201 [2024-05-15 11:06:34.807077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19236c0 is same with the state(5) to be set 00:20:38.201 [2024-05-15 11:06:34.807281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.201 [2024-05-15 11:06:34.807486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.201 [2024-05-15 11:06:34.807496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f22a0 with addr=10.0.0.2, port=4420 00:20:38.201 [2024-05-15 11:06:34.807503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f22a0 is same with the state(5) to be set 00:20:38.201 [2024-05-15 11:06:34.807699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.201 [2024-05-15 11:06:34.807907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.201 [2024-05-15 11:06:34.807917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f3200 with addr=10.0.0.2, port=4420 00:20:38.201 [2024-05-15 11:06:34.807924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3200 is same with the state(5) to be set 00:20:38.201 [2024-05-15 11:06:34.807937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175d960 (9): Bad file descriptor 00:20:38.201 [2024-05-15 11:06:34.807949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1781760 (9): Bad file descriptor 00:20:38.201 [2024-05-15 11:06:34.807958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177fdf0 (9): Bad file descriptor 00:20:38.201 [2024-05-15 11:06:34.807992] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.202 [2024-05-15 11:06:34.808016] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.202 [2024-05-15 11:06:34.808030] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.202 [2024-05-15 11:06:34.808040] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:38.202 [2024-05-15 11:06:34.808105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:38.202 [2024-05-15 11:06:34.808478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.202 [2024-05-15 11:06:34.808794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.202 [2024-05-15 11:06:34.808804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177fc10 with addr=10.0.0.2, port=4420 00:20:38.202 [2024-05-15 11:06:34.808812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177fc10 is same with the state(5) to be set 00:20:38.202 [2024-05-15 11:06:34.808986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.202 [2024-05-15 11:06:34.809345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.202 [2024-05-15 11:06:34.809355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788a40 with addr=10.0.0.2, port=4420 00:20:38.202 [2024-05-15 11:06:34.809362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788a40 is same with the state(5) to be set 00:20:38.202 [2024-05-15 11:06:34.809371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19236c0 (9): Bad file descriptor 00:20:38.202 [2024-05-15 11:06:34.809380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f22a0 (9): Bad file descriptor 00:20:38.202 [2024-05-15 11:06:34.809389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f3200 (9): Bad file descriptor 00:20:38.202 [2024-05-15 11:06:34.809397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.809403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.809411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.202 [2024-05-15 11:06:34.809422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.809428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.809435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:38.202 [2024-05-15 11:06:34.809446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.809452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.809459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:38.202 [2024-05-15 11:06:34.809525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:38.202 [2024-05-15 11:06:34.809535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.202 [2024-05-15 11:06:34.809541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.202 [2024-05-15 11:06:34.809551] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.202 [2024-05-15 11:06:34.809895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.202 [2024-05-15 11:06:34.810201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.202 [2024-05-15 11:06:34.810210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afe60 with addr=10.0.0.2, port=4420 00:20:38.202 [2024-05-15 11:06:34.810221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe60 is same with the state(5) to be set 00:20:38.202 [2024-05-15 11:06:34.810230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177fc10 (9): Bad file descriptor 00:20:38.202 [2024-05-15 11:06:34.810239] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788a40 (9): Bad file descriptor 00:20:38.202 [2024-05-15 11:06:34.810247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.810253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.810260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:38.202 [2024-05-15 11:06:34.810270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.810276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.810283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:38.202 [2024-05-15 11:06:34.810515] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.810524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.810531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:38.202 [2024-05-15 11:06:34.810567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.202 [2024-05-15 11:06:34.810574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.202 [2024-05-15 11:06:34.810581] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.202 [2024-05-15 11:06:34.810637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.202 [2024-05-15 11:06:34.810975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.202 [2024-05-15 11:06:34.810985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a9a20 with addr=10.0.0.2, port=4420 00:20:38.202 [2024-05-15 11:06:34.810995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a9a20 is same with the state(5) to be set 00:20:38.202 [2024-05-15 11:06:34.811005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe60 (9): Bad file descriptor 00:20:38.202 [2024-05-15 11:06:34.811013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.811020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.811026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:38.202 [2024-05-15 11:06:34.811036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.811043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.811050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:38.202 [2024-05-15 11:06:34.811092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.202 [2024-05-15 11:06:34.811101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.202 [2024-05-15 11:06:34.811109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a9a20 (9): Bad file descriptor 00:20:38.202 [2024-05-15 11:06:34.811118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.811127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.811134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:38.202 [2024-05-15 11:06:34.811163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.202 [2024-05-15 11:06:34.811170] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:38.202 [2024-05-15 11:06:34.811177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:38.202 [2024-05-15 11:06:34.811185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:38.202 [2024-05-15 11:06:34.811213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.463 11:06:34 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:38.463 11:06:34 -- target/shutdown.sh@139 -- # sleep 1 00:20:39.404 11:06:35 -- target/shutdown.sh@142 -- # kill -9 395344 00:20:39.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (395344) - No such process 00:20:39.404 11:06:35 -- target/shutdown.sh@142 -- # true 00:20:39.404 11:06:35 -- target/shutdown.sh@144 -- # stoptarget 00:20:39.404 11:06:35 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:39.404 11:06:35 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:39.404 11:06:35 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.404 11:06:35 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:39.404 11:06:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:39.404 11:06:35 -- nvmf/common.sh@117 -- # sync 00:20:39.404 11:06:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.404 11:06:35 -- nvmf/common.sh@120 -- # set +e 00:20:39.404 11:06:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.404 11:06:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.404 rmmod nvme_tcp 00:20:39.404 rmmod nvme_fabrics 00:20:39.404 rmmod nvme_keyring 00:20:39.404 11:06:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.404 11:06:36 -- nvmf/common.sh@124 -- # set -e 00:20:39.404 11:06:36 -- nvmf/common.sh@125 -- # return 0 00:20:39.404 11:06:36 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:39.404 11:06:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:39.404 11:06:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:39.404 11:06:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:39.404 11:06:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.404 11:06:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.404 11:06:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.404 11:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.404 11:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.948 11:06:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.948 00:20:41.948 real 0m7.765s 00:20:41.948 user 0m18.990s 00:20:41.948 sys 0m1.197s 00:20:41.948 11:06:38 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:41.948 11:06:38 -- common/autotest_common.sh@10 -- # set +x 00:20:41.948 ************************************ 00:20:41.948 END TEST nvmf_shutdown_tc3 00:20:41.948 ************************************ 00:20:41.948 11:06:38 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:41.948 00:20:41.948 real 0m32.567s 00:20:41.948 user 1m17.182s 00:20:41.948 sys 0m9.117s 00:20:41.948 11:06:38 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:41.948 11:06:38 -- common/autotest_common.sh@10 -- # set +x 00:20:41.948 ************************************ 00:20:41.948 END TEST nvmf_shutdown 00:20:41.948 ************************************ 00:20:41.948 11:06:38 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:20:41.948 11:06:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.948 11:06:38 -- common/autotest_common.sh@10 -- # set +x 00:20:41.948 11:06:38 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:20:41.948 11:06:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:41.948 11:06:38 -- common/autotest_common.sh@10 -- # set +x 00:20:41.948 11:06:38 
-- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:20:41.948 11:06:38 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:41.948 11:06:38 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:41.948 11:06:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:41.948 11:06:38 -- common/autotest_common.sh@10 -- # set +x 00:20:41.948 ************************************ 00:20:41.948 START TEST nvmf_multicontroller 00:20:41.948 ************************************ 00:20:41.948 11:06:38 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:41.948 * Looking for test storage... 00:20:41.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:41.948 11:06:38 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.948 11:06:38 -- nvmf/common.sh@7 -- # uname -s 00:20:41.948 11:06:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.948 11:06:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.948 11:06:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.948 11:06:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.948 11:06:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.948 11:06:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.948 11:06:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.948 11:06:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.948 11:06:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.948 11:06:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.948 11:06:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.948 11:06:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.948 11:06:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.948 11:06:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.948 11:06:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.948 11:06:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.948 11:06:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.948 11:06:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.948 11:06:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.948 11:06:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.948 11:06:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.948 11:06:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.948 11:06:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.948 11:06:38 -- paths/export.sh@5 -- # export PATH 00:20:41.948 11:06:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.948 11:06:38 -- nvmf/common.sh@47 -- # : 0 00:20:41.948 11:06:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.948 11:06:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.948 11:06:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.948 11:06:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.948 11:06:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.948 11:06:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.948 11:06:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.948 11:06:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.948 11:06:38 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:41.948 11:06:38 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:41.948 11:06:38 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:41.948 11:06:38 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:41.948 11:06:38 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.948 11:06:38 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:41.949 11:06:38 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:41.949 11:06:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:41.949 11:06:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.949 11:06:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:41.949 11:06:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:41.949 11:06:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:41.949 11:06:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.949 11:06:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.949 11:06:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:20:41.949 11:06:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:41.949 11:06:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:41.949 11:06:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.949 11:06:38 -- common/autotest_common.sh@10 -- # set +x 00:20:50.090 11:06:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:50.090 11:06:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.090 11:06:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.090 11:06:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:50.090 11:06:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.090 11:06:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.090 11:06:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.090 11:06:45 -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.090 11:06:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.090 11:06:45 -- nvmf/common.sh@296 -- # e810=() 00:20:50.090 11:06:45 -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.090 11:06:45 -- nvmf/common.sh@297 -- # x722=() 00:20:50.090 11:06:45 -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.090 11:06:45 -- nvmf/common.sh@298 -- # mlx=() 00:20:50.090 11:06:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.090 11:06:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.090 11:06:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.090 11:06:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.090 11:06:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.090 11:06:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.090 11:06:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:50.090 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:50.090 11:06:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.090 11:06:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:50.090 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:50.090 11:06:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:50.090 11:06:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.090 11:06:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.090 11:06:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.090 11:06:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:50.090 11:06:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.090 11:06:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:50.090 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:50.090 11:06:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.090 11:06:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.090 11:06:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.090 11:06:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:50.090 11:06:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.090 11:06:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:50.090 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:50.090 11:06:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.090 11:06:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:50.090 11:06:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:50.090 11:06:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:50.090 11:06:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.090 11:06:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.090 11:06:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.090 11:06:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:50.090 11:06:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.090 11:06:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.090 11:06:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:50.090 11:06:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.090 11:06:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.090 11:06:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:50.090 11:06:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:50.090 11:06:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.090 11:06:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.090 11:06:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.090 11:06:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.090 11:06:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:50.090 11:06:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.090 11:06:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.090 11:06:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:20:50.090 11:06:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:50.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:20:50.090 00:20:50.090 --- 10.0.0.2 ping statistics --- 00:20:50.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.090 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:20:50.090 11:06:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:20:50.090 00:20:50.090 --- 10.0.0.1 ping statistics --- 00:20:50.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.090 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:20:50.090 11:06:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.090 11:06:45 -- nvmf/common.sh@411 -- # return 0 00:20:50.090 11:06:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:50.090 11:06:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.090 11:06:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:50.090 11:06:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.090 11:06:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:50.090 11:06:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:50.090 11:06:45 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:50.090 11:06:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:50.090 11:06:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:50.090 11:06:45 -- common/autotest_common.sh@10 -- # set +x 00:20:50.090 11:06:45 -- nvmf/common.sh@470 -- # nvmfpid=400275 00:20:50.090 11:06:45 -- nvmf/common.sh@471 -- # waitforlisten 400275 00:20:50.090 11:06:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:50.090 11:06:45 -- common/autotest_common.sh@827 -- # '[' -z 400275 ']' 00:20:50.090 11:06:45 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.090 11:06:45 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:50.090 11:06:45 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.090 11:06:45 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:50.090 11:06:45 -- common/autotest_common.sh@10 -- # set +x 00:20:50.090 [2024-05-15 11:06:45.616252] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:20:50.090 [2024-05-15 11:06:45.616316] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.090 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.090 [2024-05-15 11:06:45.705201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:50.090 [2024-05-15 11:06:45.799249] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:50.090 [2024-05-15 11:06:45.799297] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.090 [2024-05-15 11:06:45.799305] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.090 [2024-05-15 11:06:45.799312] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.090 [2024-05-15 11:06:45.799319] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.090 [2024-05-15 11:06:45.799445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.090 [2024-05-15 11:06:45.799635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.090 [2024-05-15 11:06:45.799646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.090 11:06:46 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:50.090 11:06:46 -- common/autotest_common.sh@860 -- # return 0 00:20:50.090 11:06:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:50.090 11:06:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.090 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.090 11:06:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.090 11:06:46 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:50.090 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.090 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 [2024-05-15 11:06:46.442171] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 Malloc0 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 [2024-05-15 11:06:46.511751] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:50.091 [2024-05-15 11:06:46.511951] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 
11:06:46 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 [2024-05-15 11:06:46.523898] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 Malloc1 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:50.091 11:06:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 11:06:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.091 11:06:46 -- host/multicontroller.sh@44 -- # bdevperf_pid=400625 00:20:50.091 11:06:46 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.091 11:06:46 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:50.091 11:06:46 -- host/multicontroller.sh@47 -- # waitforlisten 400625 /var/tmp/bdevperf.sock 00:20:50.091 11:06:46 -- common/autotest_common.sh@827 -- # '[' -z 400625 ']' 00:20:50.091 11:06:46 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.091 11:06:46 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:50.091 11:06:46 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:50.091 11:06:46 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:50.091 11:06:46 -- common/autotest_common.sh@10 -- # set +x 00:20:51.033 11:06:47 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:51.033 11:06:47 -- common/autotest_common.sh@860 -- # return 0 00:20:51.034 11:06:47 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:51.034 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.034 NVMe0n1 00:20:51.034 11:06:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.034 11:06:47 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:51.034 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.034 11:06:47 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:51.034 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.034 11:06:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.034 1 00:20:51.034 11:06:47 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:51.034 11:06:47 -- common/autotest_common.sh@648 -- # local es=0 00:20:51.034 11:06:47 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:51.034 11:06:47 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.034 11:06:47 -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:51.034 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.034 request: 00:20:51.034 { 00:20:51.034 "name": "NVMe0", 00:20:51.034 "trtype": "tcp", 00:20:51.034 "traddr": "10.0.0.2", 00:20:51.034 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:51.034 "hostaddr": "10.0.0.2", 00:20:51.034 "hostsvcid": "60000", 00:20:51.034 "adrfam": "ipv4", 00:20:51.034 "trsvcid": "4420", 00:20:51.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.034 "method": "bdev_nvme_attach_controller", 00:20:51.034 "req_id": 1 00:20:51.034 } 00:20:51.034 Got JSON-RPC error response 00:20:51.034 response: 00:20:51.034 { 00:20:51.034 "code": -114, 00:20:51.034 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:51.034 } 00:20:51.034 11:06:47 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:51.034 11:06:47 -- common/autotest_common.sh@651 -- # es=1 00:20:51.034 11:06:47 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.034 11:06:47 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.034 11:06:47 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.034 11:06:47 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:51.034 11:06:47 -- common/autotest_common.sh@648 -- # local es=0 00:20:51.034 11:06:47 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:51.034 11:06:47 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.034 11:06:47 -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:51.034 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.034 request: 00:20:51.034 { 00:20:51.034 "name": "NVMe0", 00:20:51.034 "trtype": "tcp", 00:20:51.034 "traddr": "10.0.0.2", 00:20:51.034 "hostaddr": "10.0.0.2", 00:20:51.034 "hostsvcid": "60000", 00:20:51.034 "adrfam": "ipv4", 00:20:51.034 "trsvcid": "4420", 00:20:51.034 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.034 "method": "bdev_nvme_attach_controller", 00:20:51.034 "req_id": 1 00:20:51.034 } 00:20:51.034 Got JSON-RPC error response 00:20:51.034 response: 00:20:51.034 { 00:20:51.034 "code": -114, 00:20:51.034 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:51.034 } 00:20:51.034 11:06:47 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:51.034 11:06:47 -- common/autotest_common.sh@651 -- # es=1 00:20:51.034 11:06:47 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.034 11:06:47 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.034 11:06:47 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.034 11:06:47 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@648 -- # local es=0 00:20:51.034 11:06:47 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.034 11:06:47 -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.034 request: 00:20:51.034 { 00:20:51.034 "name": "NVMe0", 00:20:51.034 "trtype": "tcp", 00:20:51.034 "traddr": "10.0.0.2", 00:20:51.034 "hostaddr": 
"10.0.0.2", 00:20:51.034 "hostsvcid": "60000", 00:20:51.034 "adrfam": "ipv4", 00:20:51.034 "trsvcid": "4420", 00:20:51.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.034 "multipath": "disable", 00:20:51.034 "method": "bdev_nvme_attach_controller", 00:20:51.034 "req_id": 1 00:20:51.034 } 00:20:51.034 Got JSON-RPC error response 00:20:51.034 response: 00:20:51.034 { 00:20:51.034 "code": -114, 00:20:51.034 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:51.034 } 00:20:51.034 11:06:47 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:51.034 11:06:47 -- common/autotest_common.sh@651 -- # es=1 00:20:51.034 11:06:47 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.034 11:06:47 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.034 11:06:47 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.034 11:06:47 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:51.034 11:06:47 -- common/autotest_common.sh@648 -- # local es=0 00:20:51.034 11:06:47 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:51.034 11:06:47 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:51.034 11:06:47 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.034 11:06:47 -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:51.034 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.034 request: 00:20:51.034 { 00:20:51.034 "name": "NVMe0", 00:20:51.034 "trtype": "tcp", 00:20:51.034 "traddr": "10.0.0.2", 00:20:51.034 "hostaddr": "10.0.0.2", 00:20:51.034 "hostsvcid": "60000", 00:20:51.034 "adrfam": "ipv4", 00:20:51.034 "trsvcid": "4420", 00:20:51.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.034 "multipath": "failover", 00:20:51.034 "method": "bdev_nvme_attach_controller", 00:20:51.034 "req_id": 1 00:20:51.034 } 00:20:51.034 Got JSON-RPC error response 00:20:51.034 response: 00:20:51.034 { 00:20:51.034 "code": -114, 00:20:51.034 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:51.034 } 00:20:51.034 11:06:47 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:51.034 11:06:47 -- common/autotest_common.sh@651 -- # es=1 00:20:51.034 11:06:47 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.034 11:06:47 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.034 11:06:47 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.034 11:06:47 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:51.034 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.034 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 00:20:51.295 11:06:47 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:51.295 11:06:47 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:51.295 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.295 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 11:06:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.295 11:06:47 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:51.295 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.295 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 00:20:51.295 11:06:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.295 11:06:47 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:51.295 11:06:47 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:51.295 11:06:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.295 11:06:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 11:06:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.295 11:06:47 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:51.295 11:06:47 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:52.680 0 00:20:52.680 11:06:48 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:52.680 11:06:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.680 11:06:48 -- common/autotest_common.sh@10 -- # set +x 00:20:52.680 11:06:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.680 11:06:48 -- host/multicontroller.sh@100 -- # killprocess 400625 00:20:52.680 11:06:48 -- common/autotest_common.sh@946 -- # '[' -z 400625 ']' 00:20:52.680 11:06:48 -- common/autotest_common.sh@950 -- # kill -0 400625 00:20:52.680 11:06:48 -- common/autotest_common.sh@951 -- # uname 00:20:52.680 11:06:48 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:52.680 11:06:48 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 400625 00:20:52.680 11:06:49 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:52.680 11:06:49 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:52.680 11:06:49 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 400625' 00:20:52.680 killing process with pid 400625 00:20:52.680 11:06:49 -- common/autotest_common.sh@965 -- # kill 400625 00:20:52.680 11:06:49 -- common/autotest_common.sh@970 -- # wait 400625 00:20:52.680 11:06:49 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.680 11:06:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.680 11:06:49 -- common/autotest_common.sh@10 -- # set +x 00:20:52.680 11:06:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.680 11:06:49 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:52.680 11:06:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.680 11:06:49 -- common/autotest_common.sh@10 -- # set +x 00:20:52.680 11:06:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.680 11:06:49 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:52.680 
11:06:49 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:52.680 11:06:49 -- common/autotest_common.sh@1608 -- # read -r file 00:20:52.680 11:06:49 -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:52.680 11:06:49 -- common/autotest_common.sh@1607 -- # sort -u 00:20:52.680 11:06:49 -- common/autotest_common.sh@1609 -- # cat 00:20:52.680 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:52.680 [2024-05-15 11:06:46.643354] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:20:52.680 [2024-05-15 11:06:46.643408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400625 ] 00:20:52.680 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.680 [2024-05-15 11:06:46.701613] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.680 [2024-05-15 11:06:46.765199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.680 [2024-05-15 11:06:47.821369] bdev.c:4555:bdev_name_add: *ERROR*: Bdev name 475adf01-0a39-46f4-b047-199aedd83d26 already exists 00:20:52.680 [2024-05-15 11:06:47.821400] bdev.c:7672:bdev_register: *ERROR*: Unable to add uuid:475adf01-0a39-46f4-b047-199aedd83d26 alias for bdev NVMe1n1 00:20:52.680 [2024-05-15 11:06:47.821410] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:52.680 Running I/O for 1 seconds... 00:20:52.680 00:20:52.680 Latency(us) 00:20:52.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.680 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:52.680 NVMe0n1 : 1.00 27978.17 109.29 0.00 0.00 4564.60 2116.27 8738.13 00:20:52.680 =================================================================================================================== 00:20:52.680 Total : 27978.17 109.29 0.00 0.00 4564.60 2116.27 8738.13 00:20:52.680 Received shutdown signal, test time was about 1.000000 seconds 00:20:52.680 00:20:52.680 Latency(us) 00:20:52.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.680 =================================================================================================================== 00:20:52.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.680 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:52.680 11:06:49 -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:52.680 11:06:49 -- common/autotest_common.sh@1608 -- # read -r file 00:20:52.680 11:06:49 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:52.680 11:06:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:52.680 11:06:49 -- nvmf/common.sh@117 -- # sync 00:20:52.680 11:06:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.680 11:06:49 -- nvmf/common.sh@120 -- # set +e 00:20:52.680 11:06:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.680 11:06:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.680 rmmod nvme_tcp 00:20:52.680 rmmod nvme_fabrics 00:20:52.680 rmmod nvme_keyring 00:20:52.680 11:06:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.680 11:06:49 -- nvmf/common.sh@124 -- # set -e 00:20:52.680 
11:06:49 -- nvmf/common.sh@125 -- # return 0 00:20:52.680 11:06:49 -- nvmf/common.sh@478 -- # '[' -n 400275 ']' 00:20:52.680 11:06:49 -- nvmf/common.sh@479 -- # killprocess 400275 00:20:52.680 11:06:49 -- common/autotest_common.sh@946 -- # '[' -z 400275 ']' 00:20:52.680 11:06:49 -- common/autotest_common.sh@950 -- # kill -0 400275 00:20:52.680 11:06:49 -- common/autotest_common.sh@951 -- # uname 00:20:52.680 11:06:49 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:52.680 11:06:49 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 400275 00:20:52.680 11:06:49 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:52.680 11:06:49 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:52.680 11:06:49 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 400275' 00:20:52.680 killing process with pid 400275 00:20:52.680 11:06:49 -- common/autotest_common.sh@965 -- # kill 400275 00:20:52.680 [2024-05-15 11:06:49.320066] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:52.680 11:06:49 -- common/autotest_common.sh@970 -- # wait 400275 00:20:52.941 11:06:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:52.941 11:06:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:52.941 11:06:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:52.941 11:06:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.941 11:06:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.941 11:06:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.941 11:06:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.941 11:06:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.483 11:06:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:55.483 00:20:55.483 real 0m13.242s 00:20:55.483 user 0m15.901s 00:20:55.483 sys 0m5.998s 00:20:55.483 11:06:51 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:55.483 11:06:51 -- common/autotest_common.sh@10 -- # set +x 00:20:55.483 ************************************ 00:20:55.483 END TEST nvmf_multicontroller 00:20:55.483 ************************************ 00:20:55.483 11:06:51 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:55.483 11:06:51 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:55.483 11:06:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:55.483 11:06:51 -- common/autotest_common.sh@10 -- # set +x 00:20:55.483 ************************************ 00:20:55.483 START TEST nvmf_aer 00:20:55.483 ************************************ 00:20:55.483 11:06:51 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:55.483 * Looking for test storage... 
00:20:55.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:55.483 11:06:51 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.483 11:06:51 -- nvmf/common.sh@7 -- # uname -s 00:20:55.483 11:06:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.483 11:06:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.483 11:06:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.483 11:06:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.484 11:06:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.484 11:06:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.484 11:06:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.484 11:06:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.484 11:06:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.484 11:06:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.484 11:06:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.484 11:06:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.484 11:06:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.484 11:06:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.484 11:06:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.484 11:06:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.484 11:06:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.484 11:06:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.484 11:06:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.484 11:06:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.484 11:06:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.484 11:06:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.484 11:06:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.484 11:06:51 -- paths/export.sh@5 -- # export PATH 00:20:55.484 11:06:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.484 11:06:51 -- nvmf/common.sh@47 -- # : 0 00:20:55.484 11:06:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.484 11:06:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.484 11:06:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.484 11:06:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.484 11:06:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.484 11:06:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.484 11:06:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.484 11:06:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.484 11:06:51 -- host/aer.sh@11 -- # nvmftestinit 00:20:55.484 11:06:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:55.484 11:06:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.484 11:06:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:55.484 11:06:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:55.484 11:06:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:55.484 11:06:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.484 11:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.484 11:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.484 11:06:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:55.484 11:06:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:55.484 11:06:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.484 11:06:51 -- common/autotest_common.sh@10 -- # set +x 00:21:02.077 11:06:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:02.077 11:06:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.077 11:06:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.077 11:06:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.077 11:06:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.077 11:06:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.077 11:06:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.077 11:06:58 -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.077 11:06:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.077 11:06:58 -- nvmf/common.sh@296 -- # e810=() 00:21:02.077 11:06:58 -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.077 11:06:58 -- nvmf/common.sh@297 -- # x722=() 00:21:02.077 
11:06:58 -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.077 11:06:58 -- nvmf/common.sh@298 -- # mlx=() 00:21:02.077 11:06:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.077 11:06:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.077 11:06:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.077 11:06:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.077 11:06:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.077 11:06:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.077 11:06:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:02.077 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:02.077 11:06:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.077 11:06:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:02.077 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:02.077 11:06:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.077 11:06:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.077 11:06:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.077 11:06:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:02.077 11:06:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.077 11:06:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:02.077 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:02.077 11:06:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.077 11:06:58 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.077 11:06:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.077 11:06:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:02.077 11:06:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.077 11:06:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:02.077 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:02.077 11:06:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.077 11:06:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:02.077 11:06:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:02.077 11:06:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:02.077 11:06:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:02.077 11:06:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.077 11:06:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.077 11:06:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.077 11:06:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.078 11:06:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.078 11:06:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.078 11:06:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.078 11:06:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.078 11:06:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.078 11:06:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.078 11:06:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.078 11:06:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.078 11:06:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.078 11:06:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.078 11:06:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.078 11:06:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.078 11:06:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.078 11:06:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.078 11:06:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.078 11:06:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:21:02.078 00:21:02.078 --- 10.0.0.2 ping statistics --- 00:21:02.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.078 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:21:02.078 11:06:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:21:02.078 00:21:02.078 --- 10.0.0.1 ping statistics --- 00:21:02.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.078 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:21:02.078 11:06:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.078 11:06:58 -- nvmf/common.sh@411 -- # return 0 00:21:02.078 11:06:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:02.078 11:06:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.078 11:06:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:02.078 11:06:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:02.078 11:06:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.078 11:06:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:02.078 11:06:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:02.078 11:06:58 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:02.078 11:06:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:02.078 11:06:58 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:02.078 11:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:02.078 11:06:58 -- nvmf/common.sh@470 -- # nvmfpid=405185 00:21:02.078 11:06:58 -- nvmf/common.sh@471 -- # waitforlisten 405185 00:21:02.078 11:06:58 -- common/autotest_common.sh@827 -- # '[' -z 405185 ']' 00:21:02.078 11:06:58 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.078 11:06:58 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:02.078 11:06:58 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.078 11:06:58 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:02.078 11:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:02.078 11:06:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:02.078 [2024-05-15 11:06:58.642761] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:21:02.078 [2024-05-15 11:06:58.642826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.078 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.078 [2024-05-15 11:06:58.712206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.338 [2024-05-15 11:06:58.786480] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.338 [2024-05-15 11:06:58.786516] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.338 [2024-05-15 11:06:58.786524] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.338 [2024-05-15 11:06:58.786531] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.338 [2024-05-15 11:06:58.786536] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.338 [2024-05-15 11:06:58.786611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.338 [2024-05-15 11:06:58.786729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.338 [2024-05-15 11:06:58.786883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.338 [2024-05-15 11:06:58.786884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.908 11:06:59 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:02.908 11:06:59 -- common/autotest_common.sh@860 -- # return 0 00:21:02.908 11:06:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:02.908 11:06:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.908 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 11:06:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.908 11:06:59 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.908 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.908 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 [2024-05-15 11:06:59.467190] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.908 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.908 11:06:59 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:02.908 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.908 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 Malloc0 00:21:02.908 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.908 11:06:59 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:02.908 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.908 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.908 11:06:59 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:02.908 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.908 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.908 11:06:59 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.908 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.908 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 [2024-05-15 11:06:59.526427] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:02.908 [2024-05-15 11:06:59.526652] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.908 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.908 11:06:59 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:02.908 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.908 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 [ 00:21:02.908 { 00:21:02.908 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:02.908 "subtype": "Discovery", 00:21:02.908 "listen_addresses": [], 00:21:02.908 "allow_any_host": true, 00:21:02.908 "hosts": [] 00:21:02.908 }, 00:21:02.908 { 00:21:02.908 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:02.908 "subtype": "NVMe", 00:21:02.908 "listen_addresses": [ 00:21:02.908 { 00:21:02.908 "trtype": "TCP", 00:21:02.908 "adrfam": "IPv4", 00:21:02.908 "traddr": "10.0.0.2", 00:21:02.908 "trsvcid": "4420" 00:21:02.908 } 00:21:02.908 ], 00:21:02.908 "allow_any_host": true, 00:21:02.908 "hosts": [], 00:21:02.908 "serial_number": "SPDK00000000000001", 00:21:02.908 "model_number": "SPDK bdev Controller", 00:21:02.908 "max_namespaces": 2, 00:21:02.908 "min_cntlid": 1, 00:21:02.908 "max_cntlid": 65519, 00:21:02.908 "namespaces": [ 00:21:02.908 { 00:21:02.908 "nsid": 1, 00:21:02.908 "bdev_name": "Malloc0", 00:21:02.908 "name": "Malloc0", 00:21:02.908 "nguid": "6AD4A45E04F2423DB6026A0298AD9E9C", 00:21:02.908 "uuid": "6ad4a45e-04f2-423d-b602-6a0298ad9e9c" 00:21:02.908 } 00:21:02.908 ] 00:21:02.908 } 00:21:02.908 ] 00:21:02.908 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.908 11:06:59 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:02.908 11:06:59 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:02.908 11:06:59 -- host/aer.sh@33 -- # aerpid=405330 00:21:02.908 11:06:59 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:02.908 11:06:59 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:02.908 11:06:59 -- common/autotest_common.sh@1261 -- # local i=0 00:21:02.908 11:06:59 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:02.908 11:06:59 -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:21:02.908 11:06:59 -- common/autotest_common.sh@1264 -- # i=1 00:21:02.908 11:06:59 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:21:03.169 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.169 11:06:59 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:03.169 11:06:59 -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:21:03.169 11:06:59 -- common/autotest_common.sh@1264 -- # i=2 00:21:03.169 11:06:59 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:21:03.169 11:06:59 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:03.169 11:06:59 -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:21:03.169 11:06:59 -- common/autotest_common.sh@1264 -- # i=3 00:21:03.169 11:06:59 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:21:03.429 11:06:59 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:03.430 11:06:59 -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:03.430 11:06:59 -- common/autotest_common.sh@1272 -- # return 0 00:21:03.430 11:06:59 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:03.430 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.430 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:03.430 Malloc1 00:21:03.430 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.430 11:06:59 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:03.430 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.430 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:03.430 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.430 11:06:59 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:03.430 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.430 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:03.430 Asynchronous Event Request test 00:21:03.430 Attaching to 10.0.0.2 00:21:03.430 Attached to 10.0.0.2 00:21:03.430 Registering asynchronous event callbacks... 00:21:03.430 Starting namespace attribute notice tests for all controllers... 00:21:03.430 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:03.430 aer_cb - Changed Namespace 00:21:03.430 Cleaning up... 00:21:03.430 [ 00:21:03.430 { 00:21:03.430 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:03.430 "subtype": "Discovery", 00:21:03.430 "listen_addresses": [], 00:21:03.430 "allow_any_host": true, 00:21:03.430 "hosts": [] 00:21:03.430 }, 00:21:03.430 { 00:21:03.430 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.430 "subtype": "NVMe", 00:21:03.430 "listen_addresses": [ 00:21:03.430 { 00:21:03.430 "trtype": "TCP", 00:21:03.430 "adrfam": "IPv4", 00:21:03.430 "traddr": "10.0.0.2", 00:21:03.430 "trsvcid": "4420" 00:21:03.430 } 00:21:03.430 ], 00:21:03.430 "allow_any_host": true, 00:21:03.430 "hosts": [], 00:21:03.430 "serial_number": "SPDK00000000000001", 00:21:03.430 "model_number": "SPDK bdev Controller", 00:21:03.430 "max_namespaces": 2, 00:21:03.430 "min_cntlid": 1, 00:21:03.430 "max_cntlid": 65519, 00:21:03.430 "namespaces": [ 00:21:03.430 { 00:21:03.430 "nsid": 1, 00:21:03.430 "bdev_name": "Malloc0", 00:21:03.430 "name": "Malloc0", 00:21:03.430 "nguid": "6AD4A45E04F2423DB6026A0298AD9E9C", 00:21:03.430 "uuid": "6ad4a45e-04f2-423d-b602-6a0298ad9e9c" 00:21:03.430 }, 00:21:03.430 { 00:21:03.430 "nsid": 2, 00:21:03.430 "bdev_name": "Malloc1", 00:21:03.430 "name": "Malloc1", 00:21:03.430 "nguid": "876451C5B6BD48DF938857BB87A71D97", 00:21:03.430 "uuid": "876451c5-b6bd-48df-9388-57bb87a71d97" 00:21:03.430 } 00:21:03.430 ] 00:21:03.430 } 00:21:03.430 ] 00:21:03.430 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.430 11:06:59 -- host/aer.sh@43 -- # wait 405330 00:21:03.430 11:06:59 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:03.430 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.430 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:03.430 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.430 11:06:59 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:03.430 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.430 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:03.430 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.430 11:06:59 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:03.430 11:06:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.430 11:06:59 -- common/autotest_common.sh@10 -- # set +x 00:21:03.430 11:06:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.430 11:06:59 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:03.430 11:06:59 -- host/aer.sh@51 -- # nvmftestfini 00:21:03.430 11:06:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:03.430 11:06:59 -- nvmf/common.sh@117 -- # sync 00:21:03.430 11:06:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.430 11:06:59 -- nvmf/common.sh@120 -- # set +e 00:21:03.430 11:06:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.430 11:06:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.430 rmmod nvme_tcp 00:21:03.430 rmmod nvme_fabrics 00:21:03.430 rmmod nvme_keyring 00:21:03.430 11:07:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.430 11:07:00 -- nvmf/common.sh@124 -- # set -e 00:21:03.430 11:07:00 -- nvmf/common.sh@125 -- # return 0 00:21:03.430 11:07:00 -- nvmf/common.sh@478 -- # '[' -n 405185 ']' 00:21:03.430 11:07:00 -- nvmf/common.sh@479 -- # killprocess 405185 00:21:03.430 11:07:00 -- common/autotest_common.sh@946 -- # '[' -z 405185 ']' 00:21:03.430 11:07:00 -- common/autotest_common.sh@950 -- # kill -0 405185 00:21:03.430 11:07:00 -- common/autotest_common.sh@951 -- # uname 00:21:03.430 11:07:00 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:03.430 11:07:00 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 405185 00:21:03.690 11:07:00 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:03.690 11:07:00 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:03.690 11:07:00 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 405185' 00:21:03.690 killing process with pid 405185 00:21:03.690 11:07:00 -- common/autotest_common.sh@965 -- # kill 405185 00:21:03.690 [2024-05-15 11:07:00.108668] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:03.690 11:07:00 -- common/autotest_common.sh@970 -- # wait 405185 00:21:03.690 11:07:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:03.690 11:07:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:03.691 11:07:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:03.691 11:07:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.691 11:07:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.691 11:07:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.691 11:07:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.691 11:07:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.230 11:07:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.230 00:21:06.230 real 0m10.693s 00:21:06.230 user 0m7.786s 00:21:06.230 sys 0m5.492s 00:21:06.230 11:07:02 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:06.230 11:07:02 -- common/autotest_common.sh@10 -- # set +x 00:21:06.230 ************************************ 00:21:06.230 END TEST nvmf_aer 00:21:06.230 ************************************ 00:21:06.230 11:07:02 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:06.230 11:07:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
00:21:06.230 11:07:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:06.230 11:07:02 -- common/autotest_common.sh@10 -- # set +x 00:21:06.230 ************************************ 00:21:06.230 START TEST nvmf_async_init 00:21:06.230 ************************************ 00:21:06.230 11:07:02 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:06.230 * Looking for test storage... 00:21:06.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:06.230 11:07:02 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.230 11:07:02 -- nvmf/common.sh@7 -- # uname -s 00:21:06.230 11:07:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.230 11:07:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.230 11:07:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.230 11:07:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.230 11:07:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.230 11:07:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.230 11:07:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.230 11:07:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.230 11:07:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.230 11:07:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.230 11:07:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.230 11:07:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.230 11:07:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.230 11:07:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.230 11:07:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.230 11:07:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.230 11:07:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.230 11:07:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.230 11:07:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.230 11:07:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.230 11:07:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.230 11:07:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:06.230 11:07:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.230 11:07:02 -- paths/export.sh@5 -- # export PATH 00:21:06.230 11:07:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.230 11:07:02 -- nvmf/common.sh@47 -- # : 0 00:21:06.230 11:07:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.230 11:07:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.230 11:07:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.230 11:07:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.231 11:07:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.231 11:07:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.231 11:07:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.231 11:07:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.231 11:07:02 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:06.231 11:07:02 -- host/async_init.sh@14 -- # null_block_size=512 00:21:06.231 11:07:02 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:06.231 11:07:02 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:06.231 11:07:02 -- host/async_init.sh@20 -- # tr -d - 00:21:06.231 11:07:02 -- host/async_init.sh@20 -- # uuidgen 00:21:06.231 11:07:02 -- host/async_init.sh@20 -- # nguid=4f03cd63f2094543b416af5ad11fea2a 00:21:06.231 11:07:02 -- host/async_init.sh@22 -- # nvmftestinit 00:21:06.231 11:07:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:06.231 11:07:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.231 11:07:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:06.231 11:07:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:06.231 11:07:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:06.231 11:07:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.231 11:07:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.231 11:07:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.231 11:07:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:06.231 11:07:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:06.231 11:07:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.231 11:07:02 -- common/autotest_common.sh@10 -- # set +x 00:21:12.813 11:07:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:12.813 11:07:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.813 11:07:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.813 11:07:08 -- nvmf/common.sh@292 
-- # pci_net_devs=() 00:21:12.813 11:07:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.813 11:07:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.813 11:07:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.813 11:07:08 -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.813 11:07:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.813 11:07:08 -- nvmf/common.sh@296 -- # e810=() 00:21:12.813 11:07:08 -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.813 11:07:08 -- nvmf/common.sh@297 -- # x722=() 00:21:12.813 11:07:08 -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.813 11:07:08 -- nvmf/common.sh@298 -- # mlx=() 00:21:12.813 11:07:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.813 11:07:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.813 11:07:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.813 11:07:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.813 11:07:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.813 11:07:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.813 11:07:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:12.813 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:12.813 11:07:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.813 11:07:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:12.813 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:12.813 11:07:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.813 11:07:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@382 -- 
# for pci in "${pci_devs[@]}" 00:21:12.813 11:07:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.813 11:07:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:12.813 11:07:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.813 11:07:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:12.813 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:12.813 11:07:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.813 11:07:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.813 11:07:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.813 11:07:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:12.813 11:07:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.813 11:07:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:12.813 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:12.813 11:07:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.813 11:07:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:12.813 11:07:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:12.813 11:07:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:12.813 11:07:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:12.813 11:07:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.813 11:07:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.813 11:07:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.813 11:07:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.813 11:07:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.813 11:07:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.813 11:07:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.813 11:07:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.813 11:07:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.813 11:07:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.813 11:07:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.813 11:07:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.813 11:07:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.813 11:07:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.813 11:07:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.813 11:07:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.813 11:07:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.813 11:07:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.813 11:07:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.813 11:07:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:12.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:21:12.813 00:21:12.813 --- 10.0.0.2 ping statistics --- 00:21:12.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.813 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:21:12.813 11:07:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:21:12.813 00:21:12.813 --- 10.0.0.1 ping statistics --- 00:21:12.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.813 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:21:12.813 11:07:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.813 11:07:09 -- nvmf/common.sh@411 -- # return 0 00:21:12.813 11:07:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:12.813 11:07:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.813 11:07:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:12.813 11:07:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:12.813 11:07:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.813 11:07:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:12.813 11:07:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:12.813 11:07:09 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:12.813 11:07:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:12.813 11:07:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:12.813 11:07:09 -- common/autotest_common.sh@10 -- # set +x 00:21:12.813 11:07:09 -- nvmf/common.sh@470 -- # nvmfpid=409531 00:21:12.813 11:07:09 -- nvmf/common.sh@471 -- # waitforlisten 409531 00:21:12.813 11:07:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:12.813 11:07:09 -- common/autotest_common.sh@827 -- # '[' -z 409531 ']' 00:21:12.813 11:07:09 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.813 11:07:09 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:12.813 11:07:09 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.813 11:07:09 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:12.813 11:07:09 -- common/autotest_common.sh@10 -- # set +x 00:21:12.813 [2024-05-15 11:07:09.300446] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:21:12.814 [2024-05-15 11:07:09.300512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.814 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.814 [2024-05-15 11:07:09.369773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.814 [2024-05-15 11:07:09.444073] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.814 [2024-05-15 11:07:09.444112] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:12.814 [2024-05-15 11:07:09.444120] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.814 [2024-05-15 11:07:09.444127] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.814 [2024-05-15 11:07:09.444133] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.814 [2024-05-15 11:07:09.444151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.754 11:07:10 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:13.754 11:07:10 -- common/autotest_common.sh@860 -- # return 0 00:21:13.754 11:07:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:13.754 11:07:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 11:07:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.754 11:07:10 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:13.754 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 [2024-05-15 11:07:10.131246] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.754 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.754 11:07:10 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:13.754 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 null0 00:21:13.754 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.754 11:07:10 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:13.754 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.754 11:07:10 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:13.754 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.754 11:07:10 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4f03cd63f2094543b416af5ad11fea2a 00:21:13.754 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.754 11:07:10 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:13.754 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 [2024-05-15 11:07:10.171289] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:13.754 [2024-05-15 11:07:10.171483] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.754 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.754 11:07:10 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 
-f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:13.754 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 nvme0n1 00:21:13.754 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.754 11:07:10 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:13.754 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.754 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.015 [ 00:21:14.015 { 00:21:14.015 "name": "nvme0n1", 00:21:14.015 "aliases": [ 00:21:14.015 "4f03cd63-f209-4543-b416-af5ad11fea2a" 00:21:14.015 ], 00:21:14.015 "product_name": "NVMe disk", 00:21:14.015 "block_size": 512, 00:21:14.015 "num_blocks": 2097152, 00:21:14.015 "uuid": "4f03cd63-f209-4543-b416-af5ad11fea2a", 00:21:14.015 "assigned_rate_limits": { 00:21:14.015 "rw_ios_per_sec": 0, 00:21:14.015 "rw_mbytes_per_sec": 0, 00:21:14.015 "r_mbytes_per_sec": 0, 00:21:14.015 "w_mbytes_per_sec": 0 00:21:14.015 }, 00:21:14.016 "claimed": false, 00:21:14.016 "zoned": false, 00:21:14.016 "supported_io_types": { 00:21:14.016 "read": true, 00:21:14.016 "write": true, 00:21:14.016 "unmap": false, 00:21:14.016 "write_zeroes": true, 00:21:14.016 "flush": true, 00:21:14.016 "reset": true, 00:21:14.016 "compare": true, 00:21:14.016 "compare_and_write": true, 00:21:14.016 "abort": true, 00:21:14.016 "nvme_admin": true, 00:21:14.016 "nvme_io": true 00:21:14.016 }, 00:21:14.016 "memory_domains": [ 00:21:14.016 { 00:21:14.016 "dma_device_id": "system", 00:21:14.016 "dma_device_type": 1 00:21:14.016 } 00:21:14.016 ], 00:21:14.016 "driver_specific": { 00:21:14.016 "nvme": [ 00:21:14.016 { 00:21:14.016 "trid": { 00:21:14.016 "trtype": "TCP", 00:21:14.016 "adrfam": "IPv4", 00:21:14.016 "traddr": "10.0.0.2", 00:21:14.016 "trsvcid": "4420", 00:21:14.016 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:14.016 }, 00:21:14.016 "ctrlr_data": { 00:21:14.016 "cntlid": 1, 00:21:14.016 "vendor_id": "0x8086", 00:21:14.016 "model_number": "SPDK bdev Controller", 00:21:14.016 "serial_number": "00000000000000000000", 00:21:14.016 "firmware_revision": "24.05", 00:21:14.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.016 "oacs": { 00:21:14.016 "security": 0, 00:21:14.016 "format": 0, 00:21:14.016 "firmware": 0, 00:21:14.016 "ns_manage": 0 00:21:14.016 }, 00:21:14.016 "multi_ctrlr": true, 00:21:14.016 "ana_reporting": false 00:21:14.016 }, 00:21:14.016 "vs": { 00:21:14.016 "nvme_version": "1.3" 00:21:14.016 }, 00:21:14.016 "ns_data": { 00:21:14.016 "id": 1, 00:21:14.016 "can_share": true 00:21:14.016 } 00:21:14.016 } 00:21:14.016 ], 00:21:14.016 "mp_policy": "active_passive" 00:21:14.016 } 00:21:14.016 } 00:21:14.016 ] 00:21:14.016 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.016 11:07:10 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:14.016 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.016 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.016 [2024-05-15 11:07:10.419979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:14.016 [2024-05-15 11:07:10.420039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851c00 (9): Bad file descriptor 00:21:14.016 [2024-05-15 11:07:10.551641] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
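For reference, the target and initiator setup that the async_init test drives above can be replayed by hand. The sketch below is a minimal reconstruction of the same RPC sequence, assuming SPDK's scripts/rpc.py is called directly in place of the test's rpc_cmd wrapper; the addresses, NQNs and namespace GUID are taken verbatim from the log.

# Target side: TCP transport, a 1024-block null bdev (512 B blocks), subsystem cnode0 listening on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_null_create null0 1024 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4f03cd63f2094543b416af5ad11fea2a
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Initiator side (same application): attach a bdev_nvme controller over TCP, inspect it, then reset it
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_get_bdevs -b nvme0n1
scripts/rpc.py bdev_nvme_reset_controller nvme0

Between the two bdev_get_bdevs dumps the reported "cntlid" moves from 1 to 2, reflecting the new controller association created by the reset.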
00:21:14.016 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.016 11:07:10 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:14.016 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.016 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.016 [ 00:21:14.016 { 00:21:14.016 "name": "nvme0n1", 00:21:14.016 "aliases": [ 00:21:14.016 "4f03cd63-f209-4543-b416-af5ad11fea2a" 00:21:14.016 ], 00:21:14.016 "product_name": "NVMe disk", 00:21:14.016 "block_size": 512, 00:21:14.016 "num_blocks": 2097152, 00:21:14.016 "uuid": "4f03cd63-f209-4543-b416-af5ad11fea2a", 00:21:14.016 "assigned_rate_limits": { 00:21:14.016 "rw_ios_per_sec": 0, 00:21:14.016 "rw_mbytes_per_sec": 0, 00:21:14.016 "r_mbytes_per_sec": 0, 00:21:14.016 "w_mbytes_per_sec": 0 00:21:14.016 }, 00:21:14.016 "claimed": false, 00:21:14.016 "zoned": false, 00:21:14.016 "supported_io_types": { 00:21:14.016 "read": true, 00:21:14.016 "write": true, 00:21:14.016 "unmap": false, 00:21:14.016 "write_zeroes": true, 00:21:14.016 "flush": true, 00:21:14.016 "reset": true, 00:21:14.016 "compare": true, 00:21:14.016 "compare_and_write": true, 00:21:14.016 "abort": true, 00:21:14.016 "nvme_admin": true, 00:21:14.016 "nvme_io": true 00:21:14.016 }, 00:21:14.016 "memory_domains": [ 00:21:14.016 { 00:21:14.016 "dma_device_id": "system", 00:21:14.016 "dma_device_type": 1 00:21:14.016 } 00:21:14.016 ], 00:21:14.016 "driver_specific": { 00:21:14.016 "nvme": [ 00:21:14.016 { 00:21:14.016 "trid": { 00:21:14.016 "trtype": "TCP", 00:21:14.016 "adrfam": "IPv4", 00:21:14.016 "traddr": "10.0.0.2", 00:21:14.016 "trsvcid": "4420", 00:21:14.016 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:14.016 }, 00:21:14.016 "ctrlr_data": { 00:21:14.016 "cntlid": 2, 00:21:14.016 "vendor_id": "0x8086", 00:21:14.016 "model_number": "SPDK bdev Controller", 00:21:14.016 "serial_number": "00000000000000000000", 00:21:14.016 "firmware_revision": "24.05", 00:21:14.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.016 "oacs": { 00:21:14.016 "security": 0, 00:21:14.016 "format": 0, 00:21:14.016 "firmware": 0, 00:21:14.016 "ns_manage": 0 00:21:14.016 }, 00:21:14.016 "multi_ctrlr": true, 00:21:14.016 "ana_reporting": false 00:21:14.016 }, 00:21:14.016 "vs": { 00:21:14.016 "nvme_version": "1.3" 00:21:14.016 }, 00:21:14.016 "ns_data": { 00:21:14.016 "id": 1, 00:21:14.016 "can_share": true 00:21:14.016 } 00:21:14.016 } 00:21:14.016 ], 00:21:14.016 "mp_policy": "active_passive" 00:21:14.016 } 00:21:14.016 } 00:21:14.016 ] 00:21:14.016 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.016 11:07:10 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.016 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.016 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.016 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.016 11:07:10 -- host/async_init.sh@53 -- # mktemp 00:21:14.016 11:07:10 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GtbwNzM7jT 00:21:14.016 11:07:10 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:14.016 11:07:10 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GtbwNzM7jT 00:21:14.016 11:07:10 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:14.016 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.016 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.016 11:07:10 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.016 11:07:10 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:14.016 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.016 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.016 [2024-05-15 11:07:10.604567] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.016 [2024-05-15 11:07:10.604672] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:14.016 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.016 11:07:10 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GtbwNzM7jT 00:21:14.016 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.016 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.016 [2024-05-15 11:07:10.612581] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:14.016 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.016 11:07:10 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GtbwNzM7jT 00:21:14.016 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.016 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.016 [2024-05-15 11:07:10.620605] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.016 [2024-05-15 11:07:10.620641] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:14.276 nvme0n1 00:21:14.276 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.276 11:07:10 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:14.276 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.276 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.276 [ 00:21:14.276 { 00:21:14.276 "name": "nvme0n1", 00:21:14.276 "aliases": [ 00:21:14.276 "4f03cd63-f209-4543-b416-af5ad11fea2a" 00:21:14.276 ], 00:21:14.276 "product_name": "NVMe disk", 00:21:14.276 "block_size": 512, 00:21:14.276 "num_blocks": 2097152, 00:21:14.276 "uuid": "4f03cd63-f209-4543-b416-af5ad11fea2a", 00:21:14.276 "assigned_rate_limits": { 00:21:14.276 "rw_ios_per_sec": 0, 00:21:14.276 "rw_mbytes_per_sec": 0, 00:21:14.276 "r_mbytes_per_sec": 0, 00:21:14.276 "w_mbytes_per_sec": 0 00:21:14.276 }, 00:21:14.276 "claimed": false, 00:21:14.276 "zoned": false, 00:21:14.276 "supported_io_types": { 00:21:14.276 "read": true, 00:21:14.276 "write": true, 00:21:14.276 "unmap": false, 00:21:14.276 "write_zeroes": true, 00:21:14.276 "flush": true, 00:21:14.276 "reset": true, 00:21:14.276 "compare": true, 00:21:14.276 "compare_and_write": true, 00:21:14.276 "abort": true, 00:21:14.276 "nvme_admin": true, 00:21:14.276 "nvme_io": true 00:21:14.276 }, 00:21:14.276 "memory_domains": [ 00:21:14.276 { 00:21:14.276 "dma_device_id": "system", 00:21:14.276 "dma_device_type": 1 00:21:14.276 } 00:21:14.276 ], 00:21:14.276 "driver_specific": { 00:21:14.276 "nvme": [ 00:21:14.276 { 00:21:14.276 "trid": { 00:21:14.276 "trtype": "TCP", 00:21:14.276 "adrfam": "IPv4", 00:21:14.276 "traddr": "10.0.0.2", 
00:21:14.276 "trsvcid": "4421", 00:21:14.276 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:14.276 }, 00:21:14.276 "ctrlr_data": { 00:21:14.276 "cntlid": 3, 00:21:14.276 "vendor_id": "0x8086", 00:21:14.276 "model_number": "SPDK bdev Controller", 00:21:14.276 "serial_number": "00000000000000000000", 00:21:14.276 "firmware_revision": "24.05", 00:21:14.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.276 "oacs": { 00:21:14.276 "security": 0, 00:21:14.276 "format": 0, 00:21:14.276 "firmware": 0, 00:21:14.276 "ns_manage": 0 00:21:14.276 }, 00:21:14.276 "multi_ctrlr": true, 00:21:14.276 "ana_reporting": false 00:21:14.276 }, 00:21:14.276 "vs": { 00:21:14.276 "nvme_version": "1.3" 00:21:14.276 }, 00:21:14.276 "ns_data": { 00:21:14.276 "id": 1, 00:21:14.276 "can_share": true 00:21:14.276 } 00:21:14.276 } 00:21:14.276 ], 00:21:14.276 "mp_policy": "active_passive" 00:21:14.276 } 00:21:14.276 } 00:21:14.276 ] 00:21:14.276 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.276 11:07:10 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.276 11:07:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.276 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.276 11:07:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.276 11:07:10 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.GtbwNzM7jT 00:21:14.276 11:07:10 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:14.276 11:07:10 -- host/async_init.sh@78 -- # nvmftestfini 00:21:14.276 11:07:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:14.276 11:07:10 -- nvmf/common.sh@117 -- # sync 00:21:14.276 11:07:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.276 11:07:10 -- nvmf/common.sh@120 -- # set +e 00:21:14.277 11:07:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.277 11:07:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.277 rmmod nvme_tcp 00:21:14.277 rmmod nvme_fabrics 00:21:14.277 rmmod nvme_keyring 00:21:14.277 11:07:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.277 11:07:10 -- nvmf/common.sh@124 -- # set -e 00:21:14.277 11:07:10 -- nvmf/common.sh@125 -- # return 0 00:21:14.277 11:07:10 -- nvmf/common.sh@478 -- # '[' -n 409531 ']' 00:21:14.277 11:07:10 -- nvmf/common.sh@479 -- # killprocess 409531 00:21:14.277 11:07:10 -- common/autotest_common.sh@946 -- # '[' -z 409531 ']' 00:21:14.277 11:07:10 -- common/autotest_common.sh@950 -- # kill -0 409531 00:21:14.277 11:07:10 -- common/autotest_common.sh@951 -- # uname 00:21:14.277 11:07:10 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:14.277 11:07:10 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 409531 00:21:14.277 11:07:10 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:14.277 11:07:10 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:14.277 11:07:10 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 409531' 00:21:14.277 killing process with pid 409531 00:21:14.277 11:07:10 -- common/autotest_common.sh@965 -- # kill 409531 00:21:14.277 [2024-05-15 11:07:10.843202] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:14.277 [2024-05-15 11:07:10.843228] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:14.277 11:07:10 -- 
common/autotest_common.sh@970 -- # wait 409531 00:21:14.277 [2024-05-15 11:07:10.843237] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:14.537 11:07:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:14.537 11:07:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:14.537 11:07:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:14.537 11:07:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.537 11:07:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.537 11:07:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.537 11:07:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.537 11:07:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.575 11:07:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.575 00:21:16.575 real 0m10.626s 00:21:16.575 user 0m3.710s 00:21:16.575 sys 0m5.301s 00:21:16.575 11:07:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:16.575 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.575 ************************************ 00:21:16.575 END TEST nvmf_async_init 00:21:16.575 ************************************ 00:21:16.575 11:07:13 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:16.575 11:07:13 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:16.575 11:07:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:16.575 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.575 ************************************ 00:21:16.575 START TEST dma 00:21:16.575 ************************************ 00:21:16.576 11:07:13 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:16.576 * Looking for test storage... 
00:21:16.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.576 11:07:13 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.576 11:07:13 -- nvmf/common.sh@7 -- # uname -s 00:21:16.576 11:07:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.576 11:07:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.576 11:07:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.576 11:07:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.576 11:07:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.576 11:07:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.576 11:07:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.576 11:07:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.576 11:07:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.576 11:07:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.837 11:07:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.837 11:07:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.837 11:07:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.837 11:07:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.837 11:07:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.837 11:07:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.837 11:07:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.837 11:07:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.837 11:07:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.837 11:07:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.837 11:07:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.837 11:07:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.837 11:07:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.837 11:07:13 -- paths/export.sh@5 -- # export PATH 00:21:16.837 11:07:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.837 11:07:13 -- nvmf/common.sh@47 -- # : 0 00:21:16.837 11:07:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.837 11:07:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.837 11:07:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.837 11:07:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.837 11:07:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.837 11:07:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.837 11:07:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.837 11:07:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.837 11:07:13 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:16.837 11:07:13 -- host/dma.sh@13 -- # exit 0 00:21:16.837 00:21:16.837 real 0m0.132s 00:21:16.837 user 0m0.071s 00:21:16.837 sys 0m0.070s 00:21:16.837 11:07:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:16.837 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.837 ************************************ 00:21:16.837 END TEST dma 00:21:16.837 ************************************ 00:21:16.837 11:07:13 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:16.837 11:07:13 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:16.837 11:07:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:16.837 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:21:16.837 ************************************ 00:21:16.837 START TEST nvmf_identify 00:21:16.837 ************************************ 00:21:16.837 11:07:13 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:16.837 * Looking for test storage... 
00:21:16.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.837 11:07:13 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.837 11:07:13 -- nvmf/common.sh@7 -- # uname -s 00:21:16.837 11:07:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.837 11:07:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.837 11:07:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.837 11:07:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.837 11:07:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.837 11:07:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.837 11:07:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.837 11:07:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.837 11:07:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.837 11:07:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.838 11:07:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.838 11:07:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.838 11:07:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.838 11:07:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.838 11:07:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.838 11:07:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.838 11:07:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.838 11:07:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.838 11:07:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.838 11:07:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.838 11:07:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.838 11:07:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.838 11:07:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.838 11:07:13 -- paths/export.sh@5 -- # export PATH 00:21:16.838 11:07:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.838 11:07:13 -- nvmf/common.sh@47 -- # : 0 00:21:16.838 11:07:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.838 11:07:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.838 11:07:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.838 11:07:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.838 11:07:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.838 11:07:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.838 11:07:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.838 11:07:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.838 11:07:13 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:16.838 11:07:13 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:16.838 11:07:13 -- host/identify.sh@14 -- # nvmftestinit 00:21:16.838 11:07:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:16.838 11:07:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.838 11:07:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:16.838 11:07:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:16.838 11:07:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:16.838 11:07:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.838 11:07:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.838 11:07:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.838 11:07:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:16.838 11:07:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:16.838 11:07:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.838 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:21:24.984 11:07:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:24.984 11:07:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.984 11:07:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.984 11:07:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.984 11:07:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.984 11:07:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.984 11:07:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.984 11:07:20 -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.984 11:07:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.984 11:07:20 -- nvmf/common.sh@296 
-- # e810=() 00:21:24.984 11:07:20 -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.984 11:07:20 -- nvmf/common.sh@297 -- # x722=() 00:21:24.984 11:07:20 -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.984 11:07:20 -- nvmf/common.sh@298 -- # mlx=() 00:21:24.984 11:07:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.984 11:07:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.984 11:07:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.984 11:07:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.984 11:07:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.984 11:07:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.984 11:07:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:24.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:24.984 11:07:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.984 11:07:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:24.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:24.984 11:07:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.984 11:07:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.984 11:07:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.984 11:07:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:24.984 11:07:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.984 11:07:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:24.984 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:21:24.984 11:07:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.984 11:07:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.984 11:07:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.984 11:07:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:24.984 11:07:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.984 11:07:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:24.984 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:24.984 11:07:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.984 11:07:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:24.984 11:07:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:24.984 11:07:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:24.984 11:07:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:24.984 11:07:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.984 11:07:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.984 11:07:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.984 11:07:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.984 11:07:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.984 11:07:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.984 11:07:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.984 11:07:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.984 11:07:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.984 11:07:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.985 11:07:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.985 11:07:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.985 11:07:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.985 11:07:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.985 11:07:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.985 11:07:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.985 11:07:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.985 11:07:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.985 11:07:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.985 11:07:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:21:24.985 00:21:24.985 --- 10.0.0.2 ping statistics --- 00:21:24.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.985 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:21:24.985 11:07:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:21:24.985 00:21:24.985 --- 10.0.0.1 ping statistics --- 00:21:24.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.985 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:21:24.985 11:07:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.985 11:07:20 -- nvmf/common.sh@411 -- # return 0 00:21:24.985 11:07:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:24.985 11:07:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.985 11:07:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:24.985 11:07:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:24.985 11:07:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.985 11:07:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:24.985 11:07:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:24.985 11:07:20 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:24.985 11:07:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:24.985 11:07:20 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 11:07:20 -- host/identify.sh@19 -- # nvmfpid=414041 00:21:24.985 11:07:20 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.985 11:07:20 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:24.985 11:07:20 -- host/identify.sh@23 -- # waitforlisten 414041 00:21:24.985 11:07:20 -- common/autotest_common.sh@827 -- # '[' -z 414041 ']' 00:21:24.985 11:07:20 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.985 11:07:20 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:24.985 11:07:20 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.985 11:07:20 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:24.985 11:07:20 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 [2024-05-15 11:07:20.538274] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:21:24.985 [2024-05-15 11:07:20.538321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.985 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.985 [2024-05-15 11:07:20.601512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.985 [2024-05-15 11:07:20.667066] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.985 [2024-05-15 11:07:20.667098] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.985 [2024-05-15 11:07:20.667106] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.985 [2024-05-15 11:07:20.667112] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.985 [2024-05-15 11:07:20.667118] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
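The nvmftestinit phase above places one of the two detected e810 ports into a private network namespace so that the target and initiator can talk over real NICs on the same node. A condensed sketch of that wiring, using the interface names (cvl_0_0, cvl_0_1) and addresses reported in the log:

# cvl_0_0 becomes the target port (10.0.0.2) inside its own namespace; cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
modprobe nvme-tcp

The sub-millisecond round-trip times in the ping output above confirm the two ports can reach each other before nvmf_tgt is started inside the namespace.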
00:21:24.985 [2024-05-15 11:07:20.667256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.985 [2024-05-15 11:07:20.667377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.985 [2024-05-15 11:07:20.667536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.985 [2024-05-15 11:07:20.667537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.985 11:07:21 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:24.985 11:07:21 -- common/autotest_common.sh@860 -- # return 0 00:21:24.985 11:07:21 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.985 11:07:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.985 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 [2024-05-15 11:07:21.328100] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.985 11:07:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.985 11:07:21 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:24.985 11:07:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.985 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 11:07:21 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:24.985 11:07:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.985 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 Malloc0 00:21:24.985 11:07:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.985 11:07:21 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.985 11:07:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.985 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 11:07:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.985 11:07:21 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:24.985 11:07:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.985 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 11:07:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.985 11:07:21 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.985 11:07:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.985 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 [2024-05-15 11:07:21.423383] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:24.985 [2024-05-15 11:07:21.423617] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.985 11:07:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.985 11:07:21 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:24.985 11:07:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.985 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 11:07:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.985 11:07:21 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:24.985 11:07:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.985 11:07:21 -- common/autotest_common.sh@10 -- # set +x 
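Before the nvmf_get_subsystems JSON dump and the spdk_nvme_identify debug trace that follow, here is a hand-written equivalent of the sequence the identify test has just issued, again assuming scripts/rpc.py stands in for rpc_cmd; the host-side command line is the one visible in the trace further down.

# Target: 64 MB malloc namespace under cnode1, plus a discovery listener on the same address/port
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems
# Host: identify the discovery subsystem over TCP with all debug logging enabled
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all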
00:21:24.985 [ 00:21:24.985 { 00:21:24.985 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:24.985 "subtype": "Discovery", 00:21:24.985 "listen_addresses": [ 00:21:24.985 { 00:21:24.985 "trtype": "TCP", 00:21:24.985 "adrfam": "IPv4", 00:21:24.985 "traddr": "10.0.0.2", 00:21:24.985 "trsvcid": "4420" 00:21:24.985 } 00:21:24.985 ], 00:21:24.985 "allow_any_host": true, 00:21:24.985 "hosts": [] 00:21:24.985 }, 00:21:24.985 { 00:21:24.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.985 "subtype": "NVMe", 00:21:24.985 "listen_addresses": [ 00:21:24.985 { 00:21:24.985 "trtype": "TCP", 00:21:24.985 "adrfam": "IPv4", 00:21:24.985 "traddr": "10.0.0.2", 00:21:24.985 "trsvcid": "4420" 00:21:24.985 } 00:21:24.985 ], 00:21:24.985 "allow_any_host": true, 00:21:24.985 "hosts": [], 00:21:24.985 "serial_number": "SPDK00000000000001", 00:21:24.985 "model_number": "SPDK bdev Controller", 00:21:24.985 "max_namespaces": 32, 00:21:24.985 "min_cntlid": 1, 00:21:24.985 "max_cntlid": 65519, 00:21:24.985 "namespaces": [ 00:21:24.985 { 00:21:24.985 "nsid": 1, 00:21:24.985 "bdev_name": "Malloc0", 00:21:24.985 "name": "Malloc0", 00:21:24.985 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:24.985 "eui64": "ABCDEF0123456789", 00:21:24.985 "uuid": "c19b6dbf-2a5c-435e-be6f-eb3e6a4753c5" 00:21:24.985 } 00:21:24.985 ] 00:21:24.985 } 00:21:24.985 ] 00:21:24.985 11:07:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.985 11:07:21 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:24.985 [2024-05-15 11:07:21.485580] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:21:24.985 [2024-05-15 11:07:21.485647] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414299 ] 00:21:24.985 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.985 [2024-05-15 11:07:21.519190] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:24.985 [2024-05-15 11:07:21.519231] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:24.985 [2024-05-15 11:07:21.519236] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:24.985 [2024-05-15 11:07:21.519249] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:24.985 [2024-05-15 11:07:21.519257] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:24.985 [2024-05-15 11:07:21.522585] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:24.985 [2024-05-15 11:07:21.522616] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x177ac30 0 00:21:24.985 [2024-05-15 11:07:21.530555] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:24.985 [2024-05-15 11:07:21.530568] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:24.986 [2024-05-15 11:07:21.530573] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:24.986 [2024-05-15 11:07:21.530576] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 
00:21:24.986 [2024-05-15 11:07:21.530610] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.530616] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.530620] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.986 [2024-05-15 11:07:21.530632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:24.986 [2024-05-15 11:07:21.530646] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.986 [2024-05-15 11:07:21.538556] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.986 [2024-05-15 11:07:21.538566] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.986 [2024-05-15 11:07:21.538569] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538573] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.986 [2024-05-15 11:07:21.538586] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:24.986 [2024-05-15 11:07:21.538593] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:24.986 [2024-05-15 11:07:21.538598] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:24.986 [2024-05-15 11:07:21.538608] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538612] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538615] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.986 [2024-05-15 11:07:21.538623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.986 [2024-05-15 11:07:21.538635] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.986 [2024-05-15 11:07:21.538730] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.986 [2024-05-15 11:07:21.538736] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.986 [2024-05-15 11:07:21.538739] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538743] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.986 [2024-05-15 11:07:21.538748] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:24.986 [2024-05-15 11:07:21.538755] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:24.986 [2024-05-15 11:07:21.538762] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538765] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538769] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.986 [2024-05-15 11:07:21.538775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:24.986 [2024-05-15 11:07:21.538785] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.986 [2024-05-15 11:07:21.538848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.986 [2024-05-15 11:07:21.538854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.986 [2024-05-15 11:07:21.538857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538861] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.986 [2024-05-15 11:07:21.538869] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:24.986 [2024-05-15 11:07:21.538877] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:24.986 [2024-05-15 11:07:21.538883] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538887] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538890] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.986 [2024-05-15 11:07:21.538897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.986 [2024-05-15 11:07:21.538907] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.986 [2024-05-15 11:07:21.538961] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.986 [2024-05-15 11:07:21.538968] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.986 [2024-05-15 11:07:21.538971] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538975] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.986 [2024-05-15 11:07:21.538980] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:24.986 [2024-05-15 11:07:21.538990] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538993] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.538997] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.986 [2024-05-15 11:07:21.539004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.986 [2024-05-15 11:07:21.539013] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.986 [2024-05-15 11:07:21.539075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.986 [2024-05-15 11:07:21.539082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.986 [2024-05-15 11:07:21.539085] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539089] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.986 [2024-05-15 11:07:21.539094] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:24.986 [2024-05-15 11:07:21.539099] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:24.986 [2024-05-15 11:07:21.539106] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:24.986 [2024-05-15 11:07:21.539211] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:24.986 [2024-05-15 11:07:21.539216] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:24.986 [2024-05-15 11:07:21.539223] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539227] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539230] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.986 [2024-05-15 11:07:21.539237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.986 [2024-05-15 11:07:21.539247] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.986 [2024-05-15 11:07:21.539308] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.986 [2024-05-15 11:07:21.539317] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.986 [2024-05-15 11:07:21.539320] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539324] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.986 [2024-05-15 11:07:21.539329] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:24.986 [2024-05-15 11:07:21.539338] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539342] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539346] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.986 [2024-05-15 11:07:21.539352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.986 [2024-05-15 11:07:21.539362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.986 [2024-05-15 11:07:21.539415] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.986 [2024-05-15 11:07:21.539422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.986 [2024-05-15 11:07:21.539425] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539429] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.986 [2024-05-15 11:07:21.539434] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:24.986 [2024-05-15 11:07:21.539438] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:24.986 [2024-05-15 11:07:21.539446] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:24.986 [2024-05-15 11:07:21.539453] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:24.986 [2024-05-15 11:07:21.539461] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539465] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.986 [2024-05-15 11:07:21.539472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.986 [2024-05-15 11:07:21.539481] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.986 [2024-05-15 11:07:21.539571] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.986 [2024-05-15 11:07:21.539578] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.986 [2024-05-15 11:07:21.539581] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539585] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177ac30): datao=0, datal=4096, cccid=0 00:21:24.986 [2024-05-15 11:07:21.539590] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e2980) on tqpair(0x177ac30): expected_datao=0, payload_size=4096 00:21:24.986 [2024-05-15 11:07:21.539594] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539612] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539617] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539690] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.986 [2024-05-15 11:07:21.539696] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.986 [2024-05-15 11:07:21.539700] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.986 [2024-05-15 11:07:21.539703] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.986 [2024-05-15 11:07:21.539711] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:24.987 [2024-05-15 11:07:21.539719] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:24.987 [2024-05-15 11:07:21.539723] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:24.987 [2024-05-15 11:07:21.539728] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:24.987 [2024-05-15 11:07:21.539733] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:24.987 [2024-05-15 11:07:21.539737] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:24.987 [2024-05-15 11:07:21.539745] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:24.987 [2024-05-15 11:07:21.539753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539761] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.539768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.987 [2024-05-15 11:07:21.539778] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.987 [2024-05-15 11:07:21.539840] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.987 [2024-05-15 11:07:21.539846] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.987 [2024-05-15 11:07:21.539850] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539854] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2980) on tqpair=0x177ac30 00:21:24.987 [2024-05-15 11:07:21.539862] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539865] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539869] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.539875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.987 [2024-05-15 11:07:21.539881] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539884] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539888] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.539894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.987 [2024-05-15 11:07:21.539899] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539903] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539906] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.539912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.987 [2024-05-15 11:07:21.539918] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539921] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539925] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.539930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.987 [2024-05-15 11:07:21.539935] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:24.987 
[2024-05-15 11:07:21.539946] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:24.987 [2024-05-15 11:07:21.539952] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.539956] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.539962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.987 [2024-05-15 11:07:21.539973] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2980, cid 0, qid 0 00:21:24.987 [2024-05-15 11:07:21.539978] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2ae0, cid 1, qid 0 00:21:24.987 [2024-05-15 11:07:21.539983] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2c40, cid 2, qid 0 00:21:24.987 [2024-05-15 11:07:21.539988] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:24.987 [2024-05-15 11:07:21.539992] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2f00, cid 4, qid 0 00:21:24.987 [2024-05-15 11:07:21.540092] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.987 [2024-05-15 11:07:21.540098] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.987 [2024-05-15 11:07:21.540101] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.540105] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2f00) on tqpair=0x177ac30 00:21:24.987 [2024-05-15 11:07:21.540111] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:24.987 [2024-05-15 11:07:21.540115] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:24.987 [2024-05-15 11:07:21.540125] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.540130] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.540137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.987 [2024-05-15 11:07:21.540146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2f00, cid 4, qid 0 00:21:24.987 [2024-05-15 11:07:21.540213] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.987 [2024-05-15 11:07:21.540220] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.987 [2024-05-15 11:07:21.540223] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.540227] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177ac30): datao=0, datal=4096, cccid=4 00:21:24.987 [2024-05-15 11:07:21.540231] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e2f00) on tqpair(0x177ac30): expected_datao=0, payload_size=4096 00:21:24.987 [2024-05-15 11:07:21.540236] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.540246] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:21:24.987 [2024-05-15 11:07:21.540250] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580626] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.987 [2024-05-15 11:07:21.580639] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.987 [2024-05-15 11:07:21.580642] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580646] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2f00) on tqpair=0x177ac30 00:21:24.987 [2024-05-15 11:07:21.580660] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:24.987 [2024-05-15 11:07:21.580686] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580690] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.580700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.987 [2024-05-15 11:07:21.580707] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580714] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.580721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.987 [2024-05-15 11:07:21.580735] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2f00, cid 4, qid 0 00:21:24.987 [2024-05-15 11:07:21.580741] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e3060, cid 5, qid 0 00:21:24.987 [2024-05-15 11:07:21.580835] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.987 [2024-05-15 11:07:21.580842] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.987 [2024-05-15 11:07:21.580846] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580849] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177ac30): datao=0, datal=1024, cccid=4 00:21:24.987 [2024-05-15 11:07:21.580854] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e2f00) on tqpair(0x177ac30): expected_datao=0, payload_size=1024 00:21:24.987 [2024-05-15 11:07:21.580858] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580864] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580868] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580874] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.987 [2024-05-15 11:07:21.580880] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.987 [2024-05-15 11:07:21.580883] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.580887] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e3060) on tqpair=0x177ac30 00:21:24.987 [2024-05-15 11:07:21.626555] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.987 [2024-05-15 11:07:21.626564] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.987 [2024-05-15 11:07:21.626568] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.626571] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2f00) on tqpair=0x177ac30 00:21:24.987 [2024-05-15 11:07:21.626582] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.626586] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177ac30) 00:21:24.987 [2024-05-15 11:07:21.626593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.987 [2024-05-15 11:07:21.626607] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2f00, cid 4, qid 0 00:21:24.987 [2024-05-15 11:07:21.626677] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.987 [2024-05-15 11:07:21.626684] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.987 [2024-05-15 11:07:21.626687] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.626691] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177ac30): datao=0, datal=3072, cccid=4 00:21:24.987 [2024-05-15 11:07:21.626695] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e2f00) on tqpair(0x177ac30): expected_datao=0, payload_size=3072 00:21:24.987 [2024-05-15 11:07:21.626700] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.626738] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.626742] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.987 [2024-05-15 11:07:21.626782] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.988 [2024-05-15 11:07:21.626788] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.988 [2024-05-15 11:07:21.626794] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.988 [2024-05-15 11:07:21.626798] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2f00) on tqpair=0x177ac30 00:21:24.988 [2024-05-15 11:07:21.626807] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.988 [2024-05-15 11:07:21.626811] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177ac30) 00:21:24.988 [2024-05-15 11:07:21.626817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.988 [2024-05-15 11:07:21.626830] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2f00, cid 4, qid 0 00:21:24.988 [2024-05-15 11:07:21.626897] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.988 [2024-05-15 11:07:21.626904] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.988 [2024-05-15 11:07:21.626907] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.988 [2024-05-15 11:07:21.626911] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177ac30): datao=0, datal=8, cccid=4 00:21:24.988 [2024-05-15 11:07:21.626915] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e2f00) on tqpair(0x177ac30): 
expected_datao=0, payload_size=8 00:21:24.988 [2024-05-15 11:07:21.626919] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.988 [2024-05-15 11:07:21.626926] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.988 [2024-05-15 11:07:21.626929] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.254 [2024-05-15 11:07:21.667591] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.254 [2024-05-15 11:07:21.667600] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.254 [2024-05-15 11:07:21.667604] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.254 [2024-05-15 11:07:21.667608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2f00) on tqpair=0x177ac30 00:21:25.254 ===================================================== 00:21:25.254 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:25.254 ===================================================== 00:21:25.254 Controller Capabilities/Features 00:21:25.254 ================================ 00:21:25.254 Vendor ID: 0000 00:21:25.254 Subsystem Vendor ID: 0000 00:21:25.254 Serial Number: .................... 00:21:25.254 Model Number: ........................................ 00:21:25.254 Firmware Version: 24.05 00:21:25.254 Recommended Arb Burst: 0 00:21:25.254 IEEE OUI Identifier: 00 00 00 00:21:25.254 Multi-path I/O 00:21:25.254 May have multiple subsystem ports: No 00:21:25.254 May have multiple controllers: No 00:21:25.254 Associated with SR-IOV VF: No 00:21:25.254 Max Data Transfer Size: 131072 00:21:25.254 Max Number of Namespaces: 0 00:21:25.254 Max Number of I/O Queues: 1024 00:21:25.254 NVMe Specification Version (VS): 1.3 00:21:25.254 NVMe Specification Version (Identify): 1.3 00:21:25.254 Maximum Queue Entries: 128 00:21:25.254 Contiguous Queues Required: Yes 00:21:25.254 Arbitration Mechanisms Supported 00:21:25.254 Weighted Round Robin: Not Supported 00:21:25.254 Vendor Specific: Not Supported 00:21:25.254 Reset Timeout: 15000 ms 00:21:25.254 Doorbell Stride: 4 bytes 00:21:25.254 NVM Subsystem Reset: Not Supported 00:21:25.254 Command Sets Supported 00:21:25.254 NVM Command Set: Supported 00:21:25.254 Boot Partition: Not Supported 00:21:25.254 Memory Page Size Minimum: 4096 bytes 00:21:25.254 Memory Page Size Maximum: 4096 bytes 00:21:25.254 Persistent Memory Region: Not Supported 00:21:25.254 Optional Asynchronous Events Supported 00:21:25.254 Namespace Attribute Notices: Not Supported 00:21:25.254 Firmware Activation Notices: Not Supported 00:21:25.254 ANA Change Notices: Not Supported 00:21:25.254 PLE Aggregate Log Change Notices: Not Supported 00:21:25.254 LBA Status Info Alert Notices: Not Supported 00:21:25.254 EGE Aggregate Log Change Notices: Not Supported 00:21:25.254 Normal NVM Subsystem Shutdown event: Not Supported 00:21:25.254 Zone Descriptor Change Notices: Not Supported 00:21:25.254 Discovery Log Change Notices: Supported 00:21:25.254 Controller Attributes 00:21:25.254 128-bit Host Identifier: Not Supported 00:21:25.254 Non-Operational Permissive Mode: Not Supported 00:21:25.254 NVM Sets: Not Supported 00:21:25.254 Read Recovery Levels: Not Supported 00:21:25.254 Endurance Groups: Not Supported 00:21:25.254 Predictable Latency Mode: Not Supported 00:21:25.254 Traffic Based Keep ALive: Not Supported 00:21:25.254 Namespace Granularity: Not Supported 00:21:25.254 SQ Associations: Not Supported 00:21:25.254 UUID List: Not Supported 
00:21:25.254 Multi-Domain Subsystem: Not Supported 00:21:25.254 Fixed Capacity Management: Not Supported 00:21:25.254 Variable Capacity Management: Not Supported 00:21:25.254 Delete Endurance Group: Not Supported 00:21:25.254 Delete NVM Set: Not Supported 00:21:25.254 Extended LBA Formats Supported: Not Supported 00:21:25.254 Flexible Data Placement Supported: Not Supported 00:21:25.254 00:21:25.254 Controller Memory Buffer Support 00:21:25.254 ================================ 00:21:25.254 Supported: No 00:21:25.254 00:21:25.254 Persistent Memory Region Support 00:21:25.254 ================================ 00:21:25.254 Supported: No 00:21:25.254 00:21:25.254 Admin Command Set Attributes 00:21:25.254 ============================ 00:21:25.254 Security Send/Receive: Not Supported 00:21:25.254 Format NVM: Not Supported 00:21:25.254 Firmware Activate/Download: Not Supported 00:21:25.254 Namespace Management: Not Supported 00:21:25.254 Device Self-Test: Not Supported 00:21:25.254 Directives: Not Supported 00:21:25.254 NVMe-MI: Not Supported 00:21:25.254 Virtualization Management: Not Supported 00:21:25.254 Doorbell Buffer Config: Not Supported 00:21:25.254 Get LBA Status Capability: Not Supported 00:21:25.254 Command & Feature Lockdown Capability: Not Supported 00:21:25.254 Abort Command Limit: 1 00:21:25.254 Async Event Request Limit: 4 00:21:25.254 Number of Firmware Slots: N/A 00:21:25.254 Firmware Slot 1 Read-Only: N/A 00:21:25.254 Firmware Activation Without Reset: N/A 00:21:25.254 Multiple Update Detection Support: N/A 00:21:25.254 Firmware Update Granularity: No Information Provided 00:21:25.254 Per-Namespace SMART Log: No 00:21:25.254 Asymmetric Namespace Access Log Page: Not Supported 00:21:25.254 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:25.254 Command Effects Log Page: Not Supported 00:21:25.254 Get Log Page Extended Data: Supported 00:21:25.254 Telemetry Log Pages: Not Supported 00:21:25.254 Persistent Event Log Pages: Not Supported 00:21:25.254 Supported Log Pages Log Page: May Support 00:21:25.254 Commands Supported & Effects Log Page: Not Supported 00:21:25.254 Feature Identifiers & Effects Log Page:May Support 00:21:25.254 NVMe-MI Commands & Effects Log Page: May Support 00:21:25.254 Data Area 4 for Telemetry Log: Not Supported 00:21:25.254 Error Log Page Entries Supported: 128 00:21:25.254 Keep Alive: Not Supported 00:21:25.254 00:21:25.254 NVM Command Set Attributes 00:21:25.254 ========================== 00:21:25.254 Submission Queue Entry Size 00:21:25.254 Max: 1 00:21:25.254 Min: 1 00:21:25.254 Completion Queue Entry Size 00:21:25.254 Max: 1 00:21:25.254 Min: 1 00:21:25.254 Number of Namespaces: 0 00:21:25.254 Compare Command: Not Supported 00:21:25.254 Write Uncorrectable Command: Not Supported 00:21:25.254 Dataset Management Command: Not Supported 00:21:25.254 Write Zeroes Command: Not Supported 00:21:25.254 Set Features Save Field: Not Supported 00:21:25.254 Reservations: Not Supported 00:21:25.254 Timestamp: Not Supported 00:21:25.254 Copy: Not Supported 00:21:25.254 Volatile Write Cache: Not Present 00:21:25.254 Atomic Write Unit (Normal): 1 00:21:25.254 Atomic Write Unit (PFail): 1 00:21:25.254 Atomic Compare & Write Unit: 1 00:21:25.254 Fused Compare & Write: Supported 00:21:25.254 Scatter-Gather List 00:21:25.254 SGL Command Set: Supported 00:21:25.254 SGL Keyed: Supported 00:21:25.254 SGL Bit Bucket Descriptor: Not Supported 00:21:25.254 SGL Metadata Pointer: Not Supported 00:21:25.254 Oversized SGL: Not Supported 00:21:25.254 SGL Metadata Address: 
Not Supported 00:21:25.254 SGL Offset: Supported 00:21:25.254 Transport SGL Data Block: Not Supported 00:21:25.254 Replay Protected Memory Block: Not Supported 00:21:25.254 00:21:25.255 Firmware Slot Information 00:21:25.255 ========================= 00:21:25.255 Active slot: 0 00:21:25.255 00:21:25.255 00:21:25.255 Error Log 00:21:25.255 ========= 00:21:25.255 00:21:25.255 Active Namespaces 00:21:25.255 ================= 00:21:25.255 Discovery Log Page 00:21:25.255 ================== 00:21:25.255 Generation Counter: 2 00:21:25.255 Number of Records: 2 00:21:25.255 Record Format: 0 00:21:25.255 00:21:25.255 Discovery Log Entry 0 00:21:25.255 ---------------------- 00:21:25.255 Transport Type: 3 (TCP) 00:21:25.255 Address Family: 1 (IPv4) 00:21:25.255 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:25.255 Entry Flags: 00:21:25.255 Duplicate Returned Information: 1 00:21:25.255 Explicit Persistent Connection Support for Discovery: 1 00:21:25.255 Transport Requirements: 00:21:25.255 Secure Channel: Not Required 00:21:25.255 Port ID: 0 (0x0000) 00:21:25.255 Controller ID: 65535 (0xffff) 00:21:25.255 Admin Max SQ Size: 128 00:21:25.255 Transport Service Identifier: 4420 00:21:25.255 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:25.255 Transport Address: 10.0.0.2 00:21:25.255 Discovery Log Entry 1 00:21:25.255 ---------------------- 00:21:25.255 Transport Type: 3 (TCP) 00:21:25.255 Address Family: 1 (IPv4) 00:21:25.255 Subsystem Type: 2 (NVM Subsystem) 00:21:25.255 Entry Flags: 00:21:25.255 Duplicate Returned Information: 0 00:21:25.255 Explicit Persistent Connection Support for Discovery: 0 00:21:25.255 Transport Requirements: 00:21:25.255 Secure Channel: Not Required 00:21:25.255 Port ID: 0 (0x0000) 00:21:25.255 Controller ID: 65535 (0xffff) 00:21:25.255 Admin Max SQ Size: 128 00:21:25.255 Transport Service Identifier: 4420 00:21:25.255 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:25.255 Transport Address: 10.0.0.2 [2024-05-15 11:07:21.667695] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:25.255 [2024-05-15 11:07:21.667708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.255 [2024-05-15 11:07:21.667714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.255 [2024-05-15 11:07:21.667720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.255 [2024-05-15 11:07:21.667726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.255 [2024-05-15 11:07:21.667734] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.667738] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.667741] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.255 [2024-05-15 11:07:21.667749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.255 [2024-05-15 11:07:21.667761] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.255 [2024-05-15 11:07:21.667821] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.255 [2024-05-15 11:07:21.667827] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.255 [2024-05-15 11:07:21.667831] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.667835] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.255 [2024-05-15 11:07:21.667842] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.667846] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.667850] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.255 [2024-05-15 11:07:21.667858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.255 [2024-05-15 11:07:21.667871] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.255 [2024-05-15 11:07:21.667943] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.255 [2024-05-15 11:07:21.667949] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.255 [2024-05-15 11:07:21.667953] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.667957] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.255 [2024-05-15 11:07:21.667962] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:25.255 [2024-05-15 11:07:21.667966] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:25.255 [2024-05-15 11:07:21.667975] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.667979] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.667983] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.255 [2024-05-15 11:07:21.667989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.255 [2024-05-15 11:07:21.667999] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.255 [2024-05-15 11:07:21.668056] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.255 [2024-05-15 11:07:21.668062] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.255 [2024-05-15 11:07:21.668066] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668069] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.255 [2024-05-15 11:07:21.668080] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668084] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668087] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.255 [2024-05-15 11:07:21.668094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.255 [2024-05-15 11:07:21.668104] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.255 [2024-05-15 11:07:21.668167] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.255 [2024-05-15 11:07:21.668173] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.255 [2024-05-15 11:07:21.668176] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668180] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.255 [2024-05-15 11:07:21.668190] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668194] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668197] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.255 [2024-05-15 11:07:21.668204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.255 [2024-05-15 11:07:21.668213] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.255 [2024-05-15 11:07:21.668279] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.255 [2024-05-15 11:07:21.668285] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.255 [2024-05-15 11:07:21.668289] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668292] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.255 [2024-05-15 11:07:21.668305] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668309] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668312] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.255 [2024-05-15 11:07:21.668319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.255 [2024-05-15 11:07:21.668328] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.255 [2024-05-15 11:07:21.668383] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.255 [2024-05-15 11:07:21.668389] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.255 [2024-05-15 11:07:21.668393] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668396] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.255 [2024-05-15 11:07:21.668406] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668410] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668414] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.255 [2024-05-15 11:07:21.668420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.255 [2024-05-15 11:07:21.668430] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.255 [2024-05-15 11:07:21.668498] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:21:25.255 [2024-05-15 11:07:21.668505] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.255 [2024-05-15 11:07:21.668508] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668512] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.255 [2024-05-15 11:07:21.668522] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668526] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668529] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.255 [2024-05-15 11:07:21.668536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.255 [2024-05-15 11:07:21.668550] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.255 [2024-05-15 11:07:21.668608] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.255 [2024-05-15 11:07:21.668614] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.255 [2024-05-15 11:07:21.668618] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668621] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.255 [2024-05-15 11:07:21.668631] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.255 [2024-05-15 11:07:21.668635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668638] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.668645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.668655] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.668715] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.668721] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.668725] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668728] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.668738] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668743] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668747] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.668754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.668763] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.668815] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.668821] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.668825] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668828] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.668838] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668842] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668846] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.668852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.668862] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.668922] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.668928] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.668931] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668935] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.668945] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668949] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.668952] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.668959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.668969] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669032] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669038] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669042] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669045] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669055] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669059] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669063] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.669069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.669079] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669142] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669148] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669151] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669155] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on 
tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669165] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669169] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669174] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.669180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.669190] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669251] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669257] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669260] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669264] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669274] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669278] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669281] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.669288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.669297] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669354] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669361] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669364] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669368] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669378] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669381] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669385] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.669391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.669401] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669470] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669474] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669477] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669487] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669491] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669494] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.669501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.669510] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669571] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669577] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669581] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669584] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669594] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669598] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669602] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.669610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.669620] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669674] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669680] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669684] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669687] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669697] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669701] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669705] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.256 [2024-05-15 11:07:21.669711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.669721] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669790] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669796] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669799] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669803] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669813] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669816] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669820] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 
00:21:25.256 [2024-05-15 11:07:21.669827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.256 [2024-05-15 11:07:21.669836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.256 [2024-05-15 11:07:21.669888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.256 [2024-05-15 11:07:21.669894] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.256 [2024-05-15 11:07:21.669897] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669901] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.256 [2024-05-15 11:07:21.669911] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669915] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.256 [2024-05-15 11:07:21.669918] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.257 [2024-05-15 11:07:21.669925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 [2024-05-15 11:07:21.669934] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.257 [2024-05-15 11:07:21.670000] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.670006] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.670009] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670013] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.257 [2024-05-15 11:07:21.670023] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670027] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670030] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.257 [2024-05-15 11:07:21.670037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 [2024-05-15 11:07:21.670048] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.257 [2024-05-15 11:07:21.670125] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.670131] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.670134] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670138] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.257 [2024-05-15 11:07:21.670148] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670151] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670155] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.257 [2024-05-15 11:07:21.670162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 
[2024-05-15 11:07:21.670171] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.257 [2024-05-15 11:07:21.670231] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.670237] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.670241] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670244] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.257 [2024-05-15 11:07:21.670254] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670258] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670262] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.257 [2024-05-15 11:07:21.670268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 [2024-05-15 11:07:21.670277] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.257 [2024-05-15 11:07:21.670351] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.670357] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.670361] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670364] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.257 [2024-05-15 11:07:21.670374] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670378] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670382] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.257 [2024-05-15 11:07:21.670388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 [2024-05-15 11:07:21.670398] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.257 [2024-05-15 11:07:21.670452] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.670458] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.670462] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670465] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.257 [2024-05-15 11:07:21.670475] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670479] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.670483] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.257 [2024-05-15 11:07:21.670489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 [2024-05-15 11:07:21.670502] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.257 [2024-05-15 11:07:21.674553] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.674560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.674564] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.674567] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.257 [2024-05-15 11:07:21.674578] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.674582] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.674585] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177ac30) 00:21:25.257 [2024-05-15 11:07:21.674592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 [2024-05-15 11:07:21.674602] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e2da0, cid 3, qid 0 00:21:25.257 [2024-05-15 11:07:21.674685] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.674691] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.674694] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.674698] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e2da0) on tqpair=0x177ac30 00:21:25.257 [2024-05-15 11:07:21.674706] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:25.257 00:21:25.257 11:07:21 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:25.257 [2024-05-15 11:07:21.712616] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:21:25.257 [2024-05-15 11:07:21.712659] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414390 ] 00:21:25.257 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.257 [2024-05-15 11:07:21.744067] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:25.257 [2024-05-15 11:07:21.744110] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:25.257 [2024-05-15 11:07:21.744115] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:25.257 [2024-05-15 11:07:21.744131] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:25.257 [2024-05-15 11:07:21.744138] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:25.257 [2024-05-15 11:07:21.747569] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:25.257 [2024-05-15 11:07:21.747595] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e37c30 0 00:21:25.257 [2024-05-15 11:07:21.755554] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:25.257 [2024-05-15 11:07:21.755563] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:25.257 [2024-05-15 11:07:21.755568] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:25.257 [2024-05-15 11:07:21.755571] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:25.257 [2024-05-15 11:07:21.755600] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.755605] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.755612] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.257 [2024-05-15 11:07:21.755624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:25.257 [2024-05-15 11:07:21.755639] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.257 [2024-05-15 11:07:21.763555] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.763564] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.763567] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.763572] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on tqpair=0x1e37c30 00:21:25.257 [2024-05-15 11:07:21.763581] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:25.257 [2024-05-15 11:07:21.763587] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:25.257 [2024-05-15 11:07:21.763592] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:25.257 [2024-05-15 11:07:21.763602] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.763606] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 
11:07:21.763610] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.257 [2024-05-15 11:07:21.763617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 [2024-05-15 11:07:21.763629] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.257 [2024-05-15 11:07:21.763798] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.257 [2024-05-15 11:07:21.763805] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.257 [2024-05-15 11:07:21.763808] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.763812] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on tqpair=0x1e37c30 00:21:25.257 [2024-05-15 11:07:21.763817] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:25.257 [2024-05-15 11:07:21.763824] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:25.257 [2024-05-15 11:07:21.763831] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.763834] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.257 [2024-05-15 11:07:21.763838] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.257 [2024-05-15 11:07:21.763845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.257 [2024-05-15 11:07:21.763854] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.257 [2024-05-15 11:07:21.764046] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.258 [2024-05-15 11:07:21.764052] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.258 [2024-05-15 11:07:21.764056] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764059] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on tqpair=0x1e37c30 00:21:25.258 [2024-05-15 11:07:21.764065] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:25.258 [2024-05-15 11:07:21.764073] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:25.258 [2024-05-15 11:07:21.764079] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764083] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764086] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.258 [2024-05-15 11:07:21.764095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.258 [2024-05-15 11:07:21.764105] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.258 [2024-05-15 11:07:21.764302] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.258 [2024-05-15 11:07:21.764308] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:25.258 [2024-05-15 11:07:21.764312] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764315] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on tqpair=0x1e37c30 00:21:25.258 [2024-05-15 11:07:21.764321] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:25.258 [2024-05-15 11:07:21.764330] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764333] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764337] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.258 [2024-05-15 11:07:21.764343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.258 [2024-05-15 11:07:21.764353] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.258 [2024-05-15 11:07:21.764564] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.258 [2024-05-15 11:07:21.764570] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.258 [2024-05-15 11:07:21.764574] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764577] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on tqpair=0x1e37c30 00:21:25.258 [2024-05-15 11:07:21.764582] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:25.258 [2024-05-15 11:07:21.764587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:25.258 [2024-05-15 11:07:21.764594] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:25.258 [2024-05-15 11:07:21.764699] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:25.258 [2024-05-15 11:07:21.764703] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:25.258 [2024-05-15 11:07:21.764710] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764714] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764717] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.258 [2024-05-15 11:07:21.764724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.258 [2024-05-15 11:07:21.764734] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.258 [2024-05-15 11:07:21.764950] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.258 [2024-05-15 11:07:21.764957] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.258 [2024-05-15 11:07:21.764960] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764964] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on 
tqpair=0x1e37c30 00:21:25.258 [2024-05-15 11:07:21.764969] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:25.258 [2024-05-15 11:07:21.764978] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764982] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.764987] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.258 [2024-05-15 11:07:21.764994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.258 [2024-05-15 11:07:21.765003] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.258 [2024-05-15 11:07:21.765202] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.258 [2024-05-15 11:07:21.765208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.258 [2024-05-15 11:07:21.765211] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.765215] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on tqpair=0x1e37c30 00:21:25.258 [2024-05-15 11:07:21.765220] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:25.258 [2024-05-15 11:07:21.765224] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:25.258 [2024-05-15 11:07:21.765232] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:25.258 [2024-05-15 11:07:21.765243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:25.258 [2024-05-15 11:07:21.765251] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.765254] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.258 [2024-05-15 11:07:21.765261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.258 [2024-05-15 11:07:21.765271] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.258 [2024-05-15 11:07:21.765482] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:25.258 [2024-05-15 11:07:21.765489] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:25.258 [2024-05-15 11:07:21.765492] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.765496] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e37c30): datao=0, datal=4096, cccid=0 00:21:25.258 [2024-05-15 11:07:21.765501] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9f980) on tqpair(0x1e37c30): expected_datao=0, payload_size=4096 00:21:25.258 [2024-05-15 11:07:21.765505] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.765516] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.765520] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.806737] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.258 [2024-05-15 11:07:21.806746] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.258 [2024-05-15 11:07:21.806750] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.806754] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on tqpair=0x1e37c30 00:21:25.258 [2024-05-15 11:07:21.806762] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:25.258 [2024-05-15 11:07:21.806767] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:25.258 [2024-05-15 11:07:21.806771] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:25.258 [2024-05-15 11:07:21.806775] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:25.258 [2024-05-15 11:07:21.806780] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:25.258 [2024-05-15 11:07:21.806785] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:25.258 [2024-05-15 11:07:21.806796] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:25.258 [2024-05-15 11:07:21.806805] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.806809] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.258 [2024-05-15 11:07:21.806813] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.258 [2024-05-15 11:07:21.806820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:25.258 [2024-05-15 11:07:21.806831] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.258 [2024-05-15 11:07:21.807005] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.258 [2024-05-15 11:07:21.807012] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.258 [2024-05-15 11:07:21.807015] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807019] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9f980) on tqpair=0x1e37c30 00:21:25.259 [2024-05-15 11:07:21.807026] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807030] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807033] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.807039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.259 [2024-05-15 11:07:21.807046] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807049] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807053] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.807058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.259 [2024-05-15 11:07:21.807065] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807068] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807072] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.807077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.259 [2024-05-15 11:07:21.807083] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807087] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807090] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.807096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.259 [2024-05-15 11:07:21.807100] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.807110] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.807117] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.807127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.259 [2024-05-15 11:07:21.807138] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9f980, cid 0, qid 0 00:21:25.259 [2024-05-15 11:07:21.807144] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9fae0, cid 1, qid 0 00:21:25.259 [2024-05-15 11:07:21.807150] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9fc40, cid 2, qid 0 00:21:25.259 [2024-05-15 11:07:21.807155] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9fda0, cid 3, qid 0 00:21:25.259 [2024-05-15 11:07:21.807160] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9ff00, cid 4, qid 0 00:21:25.259 [2024-05-15 11:07:21.807332] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.259 [2024-05-15 11:07:21.807338] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.259 [2024-05-15 11:07:21.807342] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807345] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9ff00) on tqpair=0x1e37c30 00:21:25.259 [2024-05-15 11:07:21.807351] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:25.259 [2024-05-15 11:07:21.807356] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.807365] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.807372] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.807378] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807382] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.807385] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.807392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:25.259 [2024-05-15 11:07:21.807402] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9ff00, cid 4, qid 0 00:21:25.259 [2024-05-15 11:07:21.811553] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.259 [2024-05-15 11:07:21.811560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.259 [2024-05-15 11:07:21.811564] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.811568] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9ff00) on tqpair=0x1e37c30 00:21:25.259 [2024-05-15 11:07:21.811621] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.811630] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.811637] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.811641] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.811648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.259 [2024-05-15 11:07:21.811658] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9ff00, cid 4, qid 0 00:21:25.259 [2024-05-15 11:07:21.811821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:25.259 [2024-05-15 11:07:21.811828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:25.259 [2024-05-15 11:07:21.811832] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.811835] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e37c30): datao=0, datal=4096, cccid=4 00:21:25.259 [2024-05-15 11:07:21.811840] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9ff00) on tqpair(0x1e37c30): expected_datao=0, payload_size=4096 00:21:25.259 [2024-05-15 11:07:21.811844] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.811850] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.811856] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812040] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.259 [2024-05-15 11:07:21.812047] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.259 [2024-05-15 11:07:21.812050] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812054] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9ff00) on tqpair=0x1e37c30 00:21:25.259 [2024-05-15 11:07:21.812065] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:25.259 [2024-05-15 11:07:21.812076] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.812085] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.812092] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812095] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.812102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.259 [2024-05-15 11:07:21.812112] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9ff00, cid 4, qid 0 00:21:25.259 [2024-05-15 11:07:21.812315] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:25.259 [2024-05-15 11:07:21.812321] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:25.259 [2024-05-15 11:07:21.812325] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812328] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e37c30): datao=0, datal=4096, cccid=4 00:21:25.259 [2024-05-15 11:07:21.812333] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9ff00) on tqpair(0x1e37c30): expected_datao=0, payload_size=4096 00:21:25.259 [2024-05-15 11:07:21.812337] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812343] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812347] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812548] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.259 [2024-05-15 11:07:21.812554] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.259 [2024-05-15 11:07:21.812558] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812562] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9ff00) on tqpair=0x1e37c30 00:21:25.259 [2024-05-15 11:07:21.812573] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.812582] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:25.259 [2024-05-15 11:07:21.812589] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812592] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1e37c30) 00:21:25.259 [2024-05-15 11:07:21.812599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.259 [2024-05-15 11:07:21.812609] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9ff00, cid 4, qid 0 00:21:25.259 [2024-05-15 11:07:21.812775] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:25.259 [2024-05-15 11:07:21.812782] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:25.259 [2024-05-15 11:07:21.812786] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812789] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e37c30): datao=0, datal=4096, cccid=4 00:21:25.259 [2024-05-15 11:07:21.812795] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9ff00) on tqpair(0x1e37c30): expected_datao=0, payload_size=4096 00:21:25.259 [2024-05-15 11:07:21.812800] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812806] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.812810] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.813048] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.259 [2024-05-15 11:07:21.813055] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.259 [2024-05-15 11:07:21.813058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.259 [2024-05-15 11:07:21.813062] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9ff00) on tqpair=0x1e37c30 00:21:25.259 [2024-05-15 11:07:21.813069] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:25.260 [2024-05-15 11:07:21.813077] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:25.260 [2024-05-15 11:07:21.813085] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:25.260 [2024-05-15 11:07:21.813091] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:25.260 [2024-05-15 11:07:21.813096] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:25.260 [2024-05-15 11:07:21.813101] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:25.260 [2024-05-15 11:07:21.813105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:25.260 [2024-05-15 11:07:21.813110] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:25.260 [2024-05-15 11:07:21.813125] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813130] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.813136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.260 [2024-05-15 11:07:21.813143] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813147] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813150] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.813156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.260 [2024-05-15 11:07:21.813169] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9ff00, cid 4, qid 0 00:21:25.260 [2024-05-15 11:07:21.813174] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea0060, cid 5, qid 0 00:21:25.260 [2024-05-15 11:07:21.813342] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.260 [2024-05-15 11:07:21.813348] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.260 [2024-05-15 11:07:21.813352] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813355] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9ff00) on tqpair=0x1e37c30 00:21:25.260 [2024-05-15 11:07:21.813363] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.260 [2024-05-15 11:07:21.813369] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.260 [2024-05-15 11:07:21.813372] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813375] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea0060) on tqpair=0x1e37c30 00:21:25.260 [2024-05-15 11:07:21.813387] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813390] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.813397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.260 [2024-05-15 11:07:21.813406] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea0060, cid 5, qid 0 00:21:25.260 [2024-05-15 11:07:21.813641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.260 [2024-05-15 11:07:21.813647] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.260 [2024-05-15 11:07:21.813650] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813654] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea0060) on tqpair=0x1e37c30 00:21:25.260 [2024-05-15 11:07:21.813664] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813668] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.813674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.260 [2024-05-15 11:07:21.813683] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea0060, cid 5, qid 0 00:21:25.260 [2024-05-15 11:07:21.813868] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.260 [2024-05-15 11:07:21.813874] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.260 [2024-05-15 11:07:21.813878] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813882] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea0060) on tqpair=0x1e37c30 00:21:25.260 [2024-05-15 11:07:21.813891] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.813895] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.813901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.260 [2024-05-15 11:07:21.813910] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea0060, cid 5, qid 0 00:21:25.260 [2024-05-15 11:07:21.814098] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.260 [2024-05-15 11:07:21.814104] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.260 [2024-05-15 11:07:21.814107] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814111] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea0060) on tqpair=0x1e37c30 00:21:25.260 [2024-05-15 11:07:21.814122] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814126] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.814133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.260 [2024-05-15 11:07:21.814140] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814143] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.814150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.260 [2024-05-15 11:07:21.814156] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814160] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.814166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.260 [2024-05-15 11:07:21.814173] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814179] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e37c30) 00:21:25.260 [2024-05-15 11:07:21.814185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.260 [2024-05-15 11:07:21.814196] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea0060, cid 5, qid 0 00:21:25.260 [2024-05-15 11:07:21.814201] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9ff00, cid 4, qid 0 00:21:25.260 [2024-05-15 11:07:21.814205] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1ea01c0, cid 6, qid 0 00:21:25.260 [2024-05-15 11:07:21.814210] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea0320, cid 7, qid 0 00:21:25.260 [2024-05-15 11:07:21.814450] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:25.260 [2024-05-15 11:07:21.814456] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:25.260 [2024-05-15 11:07:21.814459] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814463] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e37c30): datao=0, datal=8192, cccid=5 00:21:25.260 [2024-05-15 11:07:21.814467] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea0060) on tqpair(0x1e37c30): expected_datao=0, payload_size=8192 00:21:25.260 [2024-05-15 11:07:21.814471] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814537] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814542] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814551] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:25.260 [2024-05-15 11:07:21.814557] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:25.260 [2024-05-15 11:07:21.814560] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814564] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e37c30): datao=0, datal=512, cccid=4 00:21:25.260 [2024-05-15 11:07:21.814568] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9ff00) on tqpair(0x1e37c30): expected_datao=0, payload_size=512 00:21:25.260 [2024-05-15 11:07:21.814573] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814579] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814583] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814588] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:25.260 [2024-05-15 11:07:21.814594] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:25.260 [2024-05-15 11:07:21.814597] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814601] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e37c30): datao=0, datal=512, cccid=6 00:21:25.260 [2024-05-15 11:07:21.814605] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea01c0) on tqpair(0x1e37c30): expected_datao=0, payload_size=512 00:21:25.260 [2024-05-15 11:07:21.814609] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814616] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814619] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814625] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:25.260 [2024-05-15 11:07:21.814631] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:25.260 [2024-05-15 11:07:21.814634] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814637] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e37c30): datao=0, datal=4096, cccid=7 
00:21:25.260 [2024-05-15 11:07:21.814642] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea0320) on tqpair(0x1e37c30): expected_datao=0, payload_size=4096 00:21:25.260 [2024-05-15 11:07:21.814646] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814659] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814663] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814670] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.260 [2024-05-15 11:07:21.814675] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.260 [2024-05-15 11:07:21.814679] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814682] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea0060) on tqpair=0x1e37c30 00:21:25.260 [2024-05-15 11:07:21.814695] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.260 [2024-05-15 11:07:21.814701] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.260 [2024-05-15 11:07:21.814705] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.260 [2024-05-15 11:07:21.814708] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9ff00) on tqpair=0x1e37c30 00:21:25.260 [2024-05-15 11:07:21.814718] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.260 [2024-05-15 11:07:21.814724] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.261 [2024-05-15 11:07:21.814727] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.261 [2024-05-15 11:07:21.814731] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea01c0) on tqpair=0x1e37c30 00:21:25.261 [2024-05-15 11:07:21.814740] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.261 [2024-05-15 11:07:21.814746] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.261 [2024-05-15 11:07:21.814749] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.261 [2024-05-15 11:07:21.814753] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea0320) on tqpair=0x1e37c30 00:21:25.261 ===================================================== 00:21:25.261 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.261 ===================================================== 00:21:25.261 Controller Capabilities/Features 00:21:25.261 ================================ 00:21:25.261 Vendor ID: 8086 00:21:25.261 Subsystem Vendor ID: 8086 00:21:25.261 Serial Number: SPDK00000000000001 00:21:25.261 Model Number: SPDK bdev Controller 00:21:25.261 Firmware Version: 24.05 00:21:25.261 Recommended Arb Burst: 6 00:21:25.261 IEEE OUI Identifier: e4 d2 5c 00:21:25.261 Multi-path I/O 00:21:25.261 May have multiple subsystem ports: Yes 00:21:25.261 May have multiple controllers: Yes 00:21:25.261 Associated with SR-IOV VF: No 00:21:25.261 Max Data Transfer Size: 131072 00:21:25.261 Max Number of Namespaces: 32 00:21:25.261 Max Number of I/O Queues: 127 00:21:25.261 NVMe Specification Version (VS): 1.3 00:21:25.261 NVMe Specification Version (Identify): 1.3 00:21:25.261 Maximum Queue Entries: 128 00:21:25.261 Contiguous Queues Required: Yes 00:21:25.261 Arbitration Mechanisms Supported 00:21:25.261 Weighted Round Robin: Not Supported 00:21:25.261 Vendor 
Specific: Not Supported 00:21:25.261 Reset Timeout: 15000 ms 00:21:25.261 Doorbell Stride: 4 bytes 00:21:25.261 NVM Subsystem Reset: Not Supported 00:21:25.261 Command Sets Supported 00:21:25.261 NVM Command Set: Supported 00:21:25.261 Boot Partition: Not Supported 00:21:25.261 Memory Page Size Minimum: 4096 bytes 00:21:25.261 Memory Page Size Maximum: 4096 bytes 00:21:25.261 Persistent Memory Region: Not Supported 00:21:25.261 Optional Asynchronous Events Supported 00:21:25.261 Namespace Attribute Notices: Supported 00:21:25.261 Firmware Activation Notices: Not Supported 00:21:25.261 ANA Change Notices: Not Supported 00:21:25.261 PLE Aggregate Log Change Notices: Not Supported 00:21:25.261 LBA Status Info Alert Notices: Not Supported 00:21:25.261 EGE Aggregate Log Change Notices: Not Supported 00:21:25.261 Normal NVM Subsystem Shutdown event: Not Supported 00:21:25.261 Zone Descriptor Change Notices: Not Supported 00:21:25.261 Discovery Log Change Notices: Not Supported 00:21:25.261 Controller Attributes 00:21:25.261 128-bit Host Identifier: Supported 00:21:25.261 Non-Operational Permissive Mode: Not Supported 00:21:25.261 NVM Sets: Not Supported 00:21:25.261 Read Recovery Levels: Not Supported 00:21:25.261 Endurance Groups: Not Supported 00:21:25.261 Predictable Latency Mode: Not Supported 00:21:25.261 Traffic Based Keep ALive: Not Supported 00:21:25.261 Namespace Granularity: Not Supported 00:21:25.261 SQ Associations: Not Supported 00:21:25.261 UUID List: Not Supported 00:21:25.261 Multi-Domain Subsystem: Not Supported 00:21:25.261 Fixed Capacity Management: Not Supported 00:21:25.261 Variable Capacity Management: Not Supported 00:21:25.261 Delete Endurance Group: Not Supported 00:21:25.261 Delete NVM Set: Not Supported 00:21:25.261 Extended LBA Formats Supported: Not Supported 00:21:25.261 Flexible Data Placement Supported: Not Supported 00:21:25.261 00:21:25.261 Controller Memory Buffer Support 00:21:25.261 ================================ 00:21:25.261 Supported: No 00:21:25.261 00:21:25.261 Persistent Memory Region Support 00:21:25.261 ================================ 00:21:25.261 Supported: No 00:21:25.261 00:21:25.261 Admin Command Set Attributes 00:21:25.261 ============================ 00:21:25.261 Security Send/Receive: Not Supported 00:21:25.261 Format NVM: Not Supported 00:21:25.261 Firmware Activate/Download: Not Supported 00:21:25.261 Namespace Management: Not Supported 00:21:25.261 Device Self-Test: Not Supported 00:21:25.261 Directives: Not Supported 00:21:25.261 NVMe-MI: Not Supported 00:21:25.261 Virtualization Management: Not Supported 00:21:25.261 Doorbell Buffer Config: Not Supported 00:21:25.261 Get LBA Status Capability: Not Supported 00:21:25.261 Command & Feature Lockdown Capability: Not Supported 00:21:25.261 Abort Command Limit: 4 00:21:25.261 Async Event Request Limit: 4 00:21:25.261 Number of Firmware Slots: N/A 00:21:25.261 Firmware Slot 1 Read-Only: N/A 00:21:25.261 Firmware Activation Without Reset: N/A 00:21:25.261 Multiple Update Detection Support: N/A 00:21:25.261 Firmware Update Granularity: No Information Provided 00:21:25.261 Per-Namespace SMART Log: No 00:21:25.261 Asymmetric Namespace Access Log Page: Not Supported 00:21:25.261 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:25.261 Command Effects Log Page: Supported 00:21:25.261 Get Log Page Extended Data: Supported 00:21:25.261 Telemetry Log Pages: Not Supported 00:21:25.261 Persistent Event Log Pages: Not Supported 00:21:25.261 Supported Log Pages Log Page: May Support 00:21:25.261 Commands 
Supported & Effects Log Page: Not Supported 00:21:25.261 Feature Identifiers & Effects Log Page:May Support 00:21:25.261 NVMe-MI Commands & Effects Log Page: May Support 00:21:25.261 Data Area 4 for Telemetry Log: Not Supported 00:21:25.261 Error Log Page Entries Supported: 128 00:21:25.261 Keep Alive: Supported 00:21:25.261 Keep Alive Granularity: 10000 ms 00:21:25.261 00:21:25.261 NVM Command Set Attributes 00:21:25.261 ========================== 00:21:25.261 Submission Queue Entry Size 00:21:25.261 Max: 64 00:21:25.261 Min: 64 00:21:25.261 Completion Queue Entry Size 00:21:25.261 Max: 16 00:21:25.261 Min: 16 00:21:25.261 Number of Namespaces: 32 00:21:25.261 Compare Command: Supported 00:21:25.261 Write Uncorrectable Command: Not Supported 00:21:25.261 Dataset Management Command: Supported 00:21:25.261 Write Zeroes Command: Supported 00:21:25.261 Set Features Save Field: Not Supported 00:21:25.261 Reservations: Supported 00:21:25.261 Timestamp: Not Supported 00:21:25.261 Copy: Supported 00:21:25.261 Volatile Write Cache: Present 00:21:25.261 Atomic Write Unit (Normal): 1 00:21:25.261 Atomic Write Unit (PFail): 1 00:21:25.261 Atomic Compare & Write Unit: 1 00:21:25.261 Fused Compare & Write: Supported 00:21:25.261 Scatter-Gather List 00:21:25.261 SGL Command Set: Supported 00:21:25.261 SGL Keyed: Supported 00:21:25.261 SGL Bit Bucket Descriptor: Not Supported 00:21:25.261 SGL Metadata Pointer: Not Supported 00:21:25.261 Oversized SGL: Not Supported 00:21:25.261 SGL Metadata Address: Not Supported 00:21:25.261 SGL Offset: Supported 00:21:25.261 Transport SGL Data Block: Not Supported 00:21:25.261 Replay Protected Memory Block: Not Supported 00:21:25.261 00:21:25.261 Firmware Slot Information 00:21:25.261 ========================= 00:21:25.261 Active slot: 1 00:21:25.261 Slot 1 Firmware Revision: 24.05 00:21:25.261 00:21:25.261 00:21:25.261 Commands Supported and Effects 00:21:25.261 ============================== 00:21:25.261 Admin Commands 00:21:25.261 -------------- 00:21:25.261 Get Log Page (02h): Supported 00:21:25.261 Identify (06h): Supported 00:21:25.261 Abort (08h): Supported 00:21:25.261 Set Features (09h): Supported 00:21:25.261 Get Features (0Ah): Supported 00:21:25.261 Asynchronous Event Request (0Ch): Supported 00:21:25.261 Keep Alive (18h): Supported 00:21:25.261 I/O Commands 00:21:25.261 ------------ 00:21:25.261 Flush (00h): Supported LBA-Change 00:21:25.261 Write (01h): Supported LBA-Change 00:21:25.261 Read (02h): Supported 00:21:25.261 Compare (05h): Supported 00:21:25.261 Write Zeroes (08h): Supported LBA-Change 00:21:25.261 Dataset Management (09h): Supported LBA-Change 00:21:25.261 Copy (19h): Supported LBA-Change 00:21:25.261 Unknown (79h): Supported LBA-Change 00:21:25.261 Unknown (7Ah): Supported 00:21:25.261 00:21:25.261 Error Log 00:21:25.261 ========= 00:21:25.261 00:21:25.261 Arbitration 00:21:25.261 =========== 00:21:25.261 Arbitration Burst: 1 00:21:25.261 00:21:25.261 Power Management 00:21:25.261 ================ 00:21:25.261 Number of Power States: 1 00:21:25.261 Current Power State: Power State #0 00:21:25.261 Power State #0: 00:21:25.261 Max Power: 0.00 W 00:21:25.261 Non-Operational State: Operational 00:21:25.261 Entry Latency: Not Reported 00:21:25.261 Exit Latency: Not Reported 00:21:25.261 Relative Read Throughput: 0 00:21:25.261 Relative Read Latency: 0 00:21:25.261 Relative Write Throughput: 0 00:21:25.261 Relative Write Latency: 0 00:21:25.261 Idle Power: Not Reported 00:21:25.261 Active Power: Not Reported 00:21:25.261 Non-Operational 
Permissive Mode: Not Supported 00:21:25.261 00:21:25.261 Health Information 00:21:25.261 ================== 00:21:25.261 Critical Warnings: 00:21:25.261 Available Spare Space: OK 00:21:25.261 Temperature: OK 00:21:25.261 Device Reliability: OK 00:21:25.262 Read Only: No 00:21:25.262 Volatile Memory Backup: OK 00:21:25.262 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:25.262 Temperature Threshold: [2024-05-15 11:07:21.814853] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.814858] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e37c30) 00:21:25.262 [2024-05-15 11:07:21.814865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.262 [2024-05-15 11:07:21.814876] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea0320, cid 7, qid 0 00:21:25.262 [2024-05-15 11:07:21.815067] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.262 [2024-05-15 11:07:21.815074] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.262 [2024-05-15 11:07:21.815077] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.815081] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea0320) on tqpair=0x1e37c30 00:21:25.262 [2024-05-15 11:07:21.815111] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:25.262 [2024-05-15 11:07:21.815122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.262 [2024-05-15 11:07:21.815128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.262 [2024-05-15 11:07:21.815134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.262 [2024-05-15 11:07:21.815140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.262 [2024-05-15 11:07:21.815148] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.815152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.815155] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e37c30) 00:21:25.262 [2024-05-15 11:07:21.815162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.262 [2024-05-15 11:07:21.815173] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9fda0, cid 3, qid 0 00:21:25.262 [2024-05-15 11:07:21.815368] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.262 [2024-05-15 11:07:21.815374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.262 [2024-05-15 11:07:21.815378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.815382] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9fda0) on tqpair=0x1e37c30 00:21:25.262 [2024-05-15 11:07:21.815389] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.815393] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.815396] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e37c30) 00:21:25.262 [2024-05-15 11:07:21.815403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.262 [2024-05-15 11:07:21.815415] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9fda0, cid 3, qid 0 00:21:25.262 [2024-05-15 11:07:21.819553] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.262 [2024-05-15 11:07:21.819560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.262 [2024-05-15 11:07:21.819564] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.819568] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9fda0) on tqpair=0x1e37c30 00:21:25.262 [2024-05-15 11:07:21.819573] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:25.262 [2024-05-15 11:07:21.819578] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:25.262 [2024-05-15 11:07:21.819587] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.819591] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.819595] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e37c30) 00:21:25.262 [2024-05-15 11:07:21.819601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.262 [2024-05-15 11:07:21.819612] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9fda0, cid 3, qid 0 00:21:25.262 [2024-05-15 11:07:21.819776] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:25.262 [2024-05-15 11:07:21.819782] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:25.262 [2024-05-15 11:07:21.819786] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:25.262 [2024-05-15 11:07:21.819790] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e9fda0) on tqpair=0x1e37c30 00:21:25.262 [2024-05-15 11:07:21.819798] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:21:25.262 0 Kelvin (-273 Celsius) 00:21:25.262 Available Spare: 0% 00:21:25.262 Available Spare Threshold: 0% 00:21:25.262 Life Percentage Used: 0% 00:21:25.262 Data Units Read: 0 00:21:25.262 Data Units Written: 0 00:21:25.262 Host Read Commands: 0 00:21:25.262 Host Write Commands: 0 00:21:25.262 Controller Busy Time: 0 minutes 00:21:25.262 Power Cycles: 0 00:21:25.262 Power On Hours: 0 hours 00:21:25.262 Unsafe Shutdowns: 0 00:21:25.262 Unrecoverable Media Errors: 0 00:21:25.262 Lifetime Error Log Entries: 0 00:21:25.262 Warning Temperature Time: 0 minutes 00:21:25.262 Critical Temperature Time: 0 minutes 00:21:25.262 00:21:25.262 Number of Queues 00:21:25.262 ================ 00:21:25.262 Number of I/O Submission Queues: 127 00:21:25.262 Number of I/O Completion Queues: 127 00:21:25.262 00:21:25.262 Active Namespaces 00:21:25.262 ================= 00:21:25.262 Namespace ID:1 00:21:25.262 Error Recovery Timeout: Unlimited 00:21:25.262 Command Set Identifier: NVM (00h) 00:21:25.262 
Deallocate: Supported 00:21:25.262 Deallocated/Unwritten Error: Not Supported 00:21:25.262 Deallocated Read Value: Unknown 00:21:25.262 Deallocate in Write Zeroes: Not Supported 00:21:25.262 Deallocated Guard Field: 0xFFFF 00:21:25.262 Flush: Supported 00:21:25.262 Reservation: Supported 00:21:25.262 Namespace Sharing Capabilities: Multiple Controllers 00:21:25.262 Size (in LBAs): 131072 (0GiB) 00:21:25.262 Capacity (in LBAs): 131072 (0GiB) 00:21:25.262 Utilization (in LBAs): 131072 (0GiB) 00:21:25.262 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:25.262 EUI64: ABCDEF0123456789 00:21:25.262 UUID: c19b6dbf-2a5c-435e-be6f-eb3e6a4753c5 00:21:25.262 Thin Provisioning: Not Supported 00:21:25.262 Per-NS Atomic Units: Yes 00:21:25.262 Atomic Boundary Size (Normal): 0 00:21:25.262 Atomic Boundary Size (PFail): 0 00:21:25.262 Atomic Boundary Offset: 0 00:21:25.262 Maximum Single Source Range Length: 65535 00:21:25.262 Maximum Copy Length: 65535 00:21:25.262 Maximum Source Range Count: 1 00:21:25.262 NGUID/EUI64 Never Reused: No 00:21:25.262 Namespace Write Protected: No 00:21:25.262 Number of LBA Formats: 1 00:21:25.262 Current LBA Format: LBA Format #00 00:21:25.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:25.262 00:21:25.262 11:07:21 -- host/identify.sh@51 -- # sync 00:21:25.262 11:07:21 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.262 11:07:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.262 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:21:25.262 11:07:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.262 11:07:21 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:25.262 11:07:21 -- host/identify.sh@56 -- # nvmftestfini 00:21:25.262 11:07:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:25.262 11:07:21 -- nvmf/common.sh@117 -- # sync 00:21:25.262 11:07:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.262 11:07:21 -- nvmf/common.sh@120 -- # set +e 00:21:25.262 11:07:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.262 11:07:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.262 rmmod nvme_tcp 00:21:25.262 rmmod nvme_fabrics 00:21:25.262 rmmod nvme_keyring 00:21:25.262 11:07:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.262 11:07:21 -- nvmf/common.sh@124 -- # set -e 00:21:25.523 11:07:21 -- nvmf/common.sh@125 -- # return 0 00:21:25.523 11:07:21 -- nvmf/common.sh@478 -- # '[' -n 414041 ']' 00:21:25.523 11:07:21 -- nvmf/common.sh@479 -- # killprocess 414041 00:21:25.523 11:07:21 -- common/autotest_common.sh@946 -- # '[' -z 414041 ']' 00:21:25.523 11:07:21 -- common/autotest_common.sh@950 -- # kill -0 414041 00:21:25.523 11:07:21 -- common/autotest_common.sh@951 -- # uname 00:21:25.523 11:07:21 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:25.523 11:07:21 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 414041 00:21:25.523 11:07:21 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:25.523 11:07:21 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:25.523 11:07:21 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 414041' 00:21:25.523 killing process with pid 414041 00:21:25.523 11:07:21 -- common/autotest_common.sh@965 -- # kill 414041 00:21:25.523 [2024-05-15 11:07:21.959127] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 
times 00:21:25.523 11:07:21 -- common/autotest_common.sh@970 -- # wait 414041 00:21:25.523 11:07:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:25.523 11:07:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:25.523 11:07:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:25.523 11:07:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.523 11:07:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.523 11:07:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.523 11:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.523 11:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.065 11:07:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:28.065 00:21:28.065 real 0m10.849s 00:21:28.065 user 0m7.806s 00:21:28.065 sys 0m5.539s 00:21:28.065 11:07:24 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:28.065 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:21:28.065 ************************************ 00:21:28.065 END TEST nvmf_identify 00:21:28.065 ************************************ 00:21:28.065 11:07:24 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:28.065 11:07:24 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:28.065 11:07:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:28.065 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:21:28.065 ************************************ 00:21:28.065 START TEST nvmf_perf 00:21:28.065 ************************************ 00:21:28.065 11:07:24 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:28.065 * Looking for test storage... 
00:21:28.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.065 11:07:24 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.065 11:07:24 -- nvmf/common.sh@7 -- # uname -s 00:21:28.065 11:07:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.065 11:07:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.065 11:07:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.066 11:07:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.066 11:07:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.066 11:07:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.066 11:07:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.066 11:07:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.066 11:07:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.066 11:07:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.066 11:07:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.066 11:07:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.066 11:07:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.066 11:07:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.066 11:07:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.066 11:07:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.066 11:07:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.066 11:07:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.066 11:07:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.066 11:07:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.066 11:07:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.066 11:07:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.066 11:07:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.066 11:07:24 -- paths/export.sh@5 -- # export PATH 00:21:28.066 11:07:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.066 11:07:24 -- nvmf/common.sh@47 -- # : 0 00:21:28.066 11:07:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.066 11:07:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.066 11:07:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.066 11:07:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.066 11:07:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.066 11:07:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.066 11:07:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.066 11:07:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.066 11:07:24 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:28.066 11:07:24 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:28.066 11:07:24 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:28.066 11:07:24 -- host/perf.sh@17 -- # nvmftestinit 00:21:28.066 11:07:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:28.066 11:07:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.066 11:07:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:28.066 11:07:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:28.066 11:07:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:28.066 11:07:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.066 11:07:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.066 11:07:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.066 11:07:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:28.066 11:07:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:28.066 11:07:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.066 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:21:34.648 11:07:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:34.648 11:07:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.648 11:07:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.648 11:07:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.648 11:07:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.648 11:07:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.648 11:07:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.648 11:07:31 -- nvmf/common.sh@295 -- # net_devs=() 
00:21:34.648 11:07:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.648 11:07:31 -- nvmf/common.sh@296 -- # e810=() 00:21:34.648 11:07:31 -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.648 11:07:31 -- nvmf/common.sh@297 -- # x722=() 00:21:34.648 11:07:31 -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.648 11:07:31 -- nvmf/common.sh@298 -- # mlx=() 00:21:34.648 11:07:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.648 11:07:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.648 11:07:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.648 11:07:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.648 11:07:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.648 11:07:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.649 11:07:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.649 11:07:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.649 11:07:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:34.649 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:34.649 11:07:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.649 11:07:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:34.649 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:34.649 11:07:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.649 11:07:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.649 11:07:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.649 11:07:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:34.649 11:07:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:21:34.649 11:07:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:34.649 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:34.649 11:07:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.649 11:07:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.649 11:07:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.649 11:07:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:34.649 11:07:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.649 11:07:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:34.649 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:34.649 11:07:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.649 11:07:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:34.649 11:07:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:34.649 11:07:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:34.649 11:07:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:34.649 11:07:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.649 11:07:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.649 11:07:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.649 11:07:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.649 11:07:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.649 11:07:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.649 11:07:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.649 11:07:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.649 11:07:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.649 11:07:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.649 11:07:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:34.649 11:07:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.649 11:07:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.909 11:07:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.909 11:07:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.909 11:07:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.909 11:07:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.909 11:07:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.909 11:07:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.909 11:07:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:21:34.909 00:21:34.909 --- 10.0.0.2 ping statistics --- 00:21:34.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.909 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:21:34.909 11:07:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:21:34.909 00:21:34.909 --- 10.0.0.1 ping statistics --- 00:21:34.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.909 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:21:34.909 11:07:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.909 11:07:31 -- nvmf/common.sh@411 -- # return 0 00:21:34.909 11:07:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:34.909 11:07:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.909 11:07:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:34.909 11:07:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:34.909 11:07:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.909 11:07:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:34.909 11:07:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:34.909 11:07:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:34.909 11:07:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:34.909 11:07:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:34.909 11:07:31 -- common/autotest_common.sh@10 -- # set +x 00:21:34.909 11:07:31 -- nvmf/common.sh@470 -- # nvmfpid=418394 00:21:34.909 11:07:31 -- nvmf/common.sh@471 -- # waitforlisten 418394 00:21:34.909 11:07:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:34.909 11:07:31 -- common/autotest_common.sh@827 -- # '[' -z 418394 ']' 00:21:34.909 11:07:31 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.909 11:07:31 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:34.909 11:07:31 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.909 11:07:31 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:34.909 11:07:31 -- common/autotest_common.sh@10 -- # set +x 00:21:35.169 [2024-05-15 11:07:31.611855] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:21:35.169 [2024-05-15 11:07:31.611908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.169 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.169 [2024-05-15 11:07:31.679066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.169 [2024-05-15 11:07:31.743902] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.169 [2024-05-15 11:07:31.743940] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.169 [2024-05-15 11:07:31.743948] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.169 [2024-05-15 11:07:31.743954] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.169 [2024-05-15 11:07:31.743959] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.169 [2024-05-15 11:07:31.744103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.169 [2024-05-15 11:07:31.744216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.169 [2024-05-15 11:07:31.744373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.169 [2024-05-15 11:07:31.744373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.739 11:07:32 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:35.739 11:07:32 -- common/autotest_common.sh@860 -- # return 0 00:21:35.739 11:07:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:35.739 11:07:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.739 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:35.999 11:07:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.999 11:07:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:35.999 11:07:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:36.259 11:07:32 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:36.259 11:07:32 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:36.519 11:07:33 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:21:36.519 11:07:33 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:36.779 11:07:33 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:36.779 11:07:33 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:21:36.779 11:07:33 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:36.779 11:07:33 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:36.779 11:07:33 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.779 [2024-05-15 11:07:33.392636] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.779 11:07:33 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.040 11:07:33 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:37.040 11:07:33 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.300 11:07:33 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:37.301 11:07:33 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:37.301 11:07:33 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.561 [2024-05-15 11:07:34.070947] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:37.561 [2024-05-15 11:07:34.071192] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.561 11:07:34 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:37.821 11:07:34 -- host/perf.sh@52 -- # 
'[' -n 0000:65:00.0 ']' 00:21:37.821 11:07:34 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:37.821 11:07:34 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:37.821 11:07:34 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:39.202 Initializing NVMe Controllers 00:21:39.202 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:21:39.202 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:21:39.202 Initialization complete. Launching workers. 00:21:39.202 ======================================================== 00:21:39.202 Latency(us) 00:21:39.202 Device Information : IOPS MiB/s Average min max 00:21:39.202 PCIE (0000:65:00.0) NSID 1 from core 0: 80116.27 312.95 398.87 54.72 5207.00 00:21:39.202 ======================================================== 00:21:39.203 Total : 80116.27 312.95 398.87 54.72 5207.00 00:21:39.203 00:21:39.203 11:07:35 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:39.203 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.583 Initializing NVMe Controllers 00:21:40.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:40.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:40.583 Initialization complete. Launching workers. 00:21:40.583 ======================================================== 00:21:40.583 Latency(us) 00:21:40.583 Device Information : IOPS MiB/s Average min max 00:21:40.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 113.00 0.44 9201.72 102.93 45606.07 00:21:40.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24527.19 6983.07 48884.53 00:21:40.583 ======================================================== 00:21:40.584 Total : 154.00 0.60 13281.88 102.93 48884.53 00:21:40.584 00:21:40.584 11:07:36 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.584 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.523 Initializing NVMe Controllers 00:21:41.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:41.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:41.523 Initialization complete. Launching workers. 
00:21:41.523 ======================================================== 00:21:41.523 Latency(us) 00:21:41.523 Device Information : IOPS MiB/s Average min max 00:21:41.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11687.99 45.66 2743.47 491.69 8403.89 00:21:41.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3821.00 14.93 8424.60 6803.59 16969.53 00:21:41.523 ======================================================== 00:21:41.523 Total : 15508.98 60.58 4143.15 491.69 16969.53 00:21:41.523 00:21:41.523 11:07:38 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:41.523 11:07:38 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:41.523 11:07:38 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.523 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.064 Initializing NVMe Controllers 00:21:44.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.064 Controller IO queue size 128, less than required. 00:21:44.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:44.064 Controller IO queue size 128, less than required. 00:21:44.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:44.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:44.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:44.065 Initialization complete. Launching workers. 00:21:44.065 ======================================================== 00:21:44.065 Latency(us) 00:21:44.065 Device Information : IOPS MiB/s Average min max 00:21:44.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1597.81 399.45 80754.61 48912.90 121674.66 00:21:44.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.51 146.63 226112.54 63807.34 371532.88 00:21:44.065 ======================================================== 00:21:44.065 Total : 2184.32 546.08 119784.67 48912.90 371532.88 00:21:44.065 00:21:44.065 11:07:40 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:44.065 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.065 No valid NVMe controllers or AIO or URING devices found 00:21:44.065 Initializing NVMe Controllers 00:21:44.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.065 Controller IO queue size 128, less than required. 00:21:44.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:44.065 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:44.065 Controller IO queue size 128, less than required. 00:21:44.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:44.065 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:44.065 WARNING: Some requested NVMe devices were skipped 00:21:44.065 11:07:40 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:44.065 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.360 Initializing NVMe Controllers 00:21:47.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.360 Controller IO queue size 128, less than required. 00:21:47.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.360 Controller IO queue size 128, less than required. 00:21:47.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:47.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:47.360 Initialization complete. Launching workers. 00:21:47.360 00:21:47.360 ==================== 00:21:47.360 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:47.360 TCP transport: 00:21:47.360 polls: 25295 00:21:47.360 idle_polls: 12015 00:21:47.360 sock_completions: 13280 00:21:47.360 nvme_completions: 6129 00:21:47.360 submitted_requests: 9136 00:21:47.361 queued_requests: 1 00:21:47.361 00:21:47.361 ==================== 00:21:47.361 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:47.361 TCP transport: 00:21:47.361 polls: 25762 00:21:47.361 idle_polls: 12096 00:21:47.361 sock_completions: 13666 00:21:47.361 nvme_completions: 6965 00:21:47.361 submitted_requests: 10500 00:21:47.361 queued_requests: 1 00:21:47.361 ======================================================== 00:21:47.361 Latency(us) 00:21:47.361 Device Information : IOPS MiB/s Average min max 00:21:47.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1531.40 382.85 84945.29 42286.49 141405.04 00:21:47.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1740.32 435.08 74809.97 39155.85 109226.23 00:21:47.361 ======================================================== 00:21:47.361 Total : 3271.73 817.93 79554.03 39155.85 141405.04 00:21:47.361 00:21:47.361 11:07:43 -- host/perf.sh@66 -- # sync 00:21:47.361 11:07:43 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.361 11:07:43 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:47.361 11:07:43 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:47.361 11:07:43 -- host/perf.sh@114 -- # nvmftestfini 00:21:47.361 11:07:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:47.361 11:07:43 -- nvmf/common.sh@117 -- # sync 00:21:47.361 11:07:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.361 11:07:43 -- nvmf/common.sh@120 -- # set +e 00:21:47.361 11:07:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.361 11:07:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.361 rmmod nvme_tcp 00:21:47.361 rmmod nvme_fabrics 00:21:47.361 rmmod nvme_keyring 00:21:47.361 11:07:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.361 11:07:43 -- nvmf/common.sh@124 -- # set -e 00:21:47.361 11:07:43 -- nvmf/common.sh@125 -- # return 0 00:21:47.361 11:07:43 -- 
nvmf/common.sh@478 -- # '[' -n 418394 ']' 00:21:47.361 11:07:43 -- nvmf/common.sh@479 -- # killprocess 418394 00:21:47.361 11:07:43 -- common/autotest_common.sh@946 -- # '[' -z 418394 ']' 00:21:47.361 11:07:43 -- common/autotest_common.sh@950 -- # kill -0 418394 00:21:47.361 11:07:43 -- common/autotest_common.sh@951 -- # uname 00:21:47.361 11:07:43 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:47.361 11:07:43 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 418394 00:21:47.361 11:07:43 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:47.361 11:07:43 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:47.361 11:07:43 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 418394' 00:21:47.361 killing process with pid 418394 00:21:47.361 11:07:43 -- common/autotest_common.sh@965 -- # kill 418394 00:21:47.361 [2024-05-15 11:07:43.561658] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:47.361 11:07:43 -- common/autotest_common.sh@970 -- # wait 418394 00:21:49.271 11:07:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:49.271 11:07:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:49.271 11:07:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:49.271 11:07:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.271 11:07:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.271 11:07:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.271 11:07:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.271 11:07:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.181 11:07:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.181 00:21:51.181 real 0m23.349s 00:21:51.181 user 0m56.616s 00:21:51.181 sys 0m7.862s 00:21:51.181 11:07:47 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:51.181 11:07:47 -- common/autotest_common.sh@10 -- # set +x 00:21:51.181 ************************************ 00:21:51.181 END TEST nvmf_perf 00:21:51.181 ************************************ 00:21:51.181 11:07:47 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:51.181 11:07:47 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:51.181 11:07:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:51.181 11:07:47 -- common/autotest_common.sh@10 -- # set +x 00:21:51.181 ************************************ 00:21:51.181 START TEST nvmf_fio_host 00:21:51.181 ************************************ 00:21:51.181 11:07:47 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:51.181 * Looking for test storage... 
00:21:51.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.181 11:07:47 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.181 11:07:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.181 11:07:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.181 11:07:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.181 11:07:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.181 11:07:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.181 11:07:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.181 11:07:47 -- paths/export.sh@5 -- # export PATH 00:21:51.181 11:07:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.181 11:07:47 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.181 11:07:47 -- nvmf/common.sh@7 -- # uname -s 00:21:51.181 11:07:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.181 11:07:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.181 11:07:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.181 11:07:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.181 11:07:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.181 11:07:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.181 11:07:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.181 11:07:47 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.181 11:07:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.181 11:07:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.181 11:07:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.181 11:07:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.181 11:07:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.181 11:07:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.181 11:07:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.181 11:07:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.181 11:07:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.181 11:07:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.181 11:07:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.181 11:07:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.181 11:07:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.181 11:07:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.181 11:07:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.181 11:07:47 -- paths/export.sh@5 -- # export PATH 00:21:51.181 11:07:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.181 11:07:47 -- nvmf/common.sh@47 -- # : 0 00:21:51.181 11:07:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.181 11:07:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.181 11:07:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.181 11:07:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.182 11:07:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.182 11:07:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.182 11:07:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.182 11:07:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.442 11:07:47 -- host/fio.sh@12 -- # nvmftestinit 00:21:51.442 11:07:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:51.442 11:07:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.442 11:07:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:51.442 11:07:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:51.442 11:07:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:51.442 11:07:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.442 11:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.442 11:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.442 11:07:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:51.442 11:07:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:51.442 11:07:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.442 11:07:47 -- common/autotest_common.sh@10 -- # set +x 00:21:58.028 11:07:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:58.028 11:07:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.028 11:07:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.028 11:07:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.028 11:07:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.028 11:07:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.028 11:07:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.028 11:07:54 -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.028 11:07:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.028 11:07:54 -- nvmf/common.sh@296 -- # e810=() 00:21:58.028 11:07:54 -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.028 11:07:54 -- nvmf/common.sh@297 -- # x722=() 00:21:58.028 11:07:54 -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.028 11:07:54 -- nvmf/common.sh@298 -- # mlx=() 00:21:58.028 11:07:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.028 11:07:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.028 11:07:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.028 11:07:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.028 11:07:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.028 11:07:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.028 11:07:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:58.028 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:58.028 11:07:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.028 11:07:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:58.028 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:58.028 11:07:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.028 11:07:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.028 11:07:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.028 11:07:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:58.028 11:07:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.028 11:07:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:58.028 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:58.028 11:07:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.028 11:07:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.028 11:07:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.028 11:07:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:58.028 11:07:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.028 11:07:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:58.028 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:58.028 11:07:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.028 11:07:54 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:58.028 11:07:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:58.028 11:07:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:58.028 11:07:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:58.028 11:07:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.028 11:07:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.028 11:07:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.028 11:07:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.028 11:07:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.028 11:07:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.028 11:07:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.028 11:07:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.028 11:07:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.028 11:07:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.028 11:07:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:58.028 11:07:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.028 11:07:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.028 11:07:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.028 11:07:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.028 11:07:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.028 11:07:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.289 11:07:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.289 11:07:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.289 11:07:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:21:58.289 00:21:58.289 --- 10.0.0.2 ping statistics --- 00:21:58.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.289 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:21:58.289 11:07:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:58.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:21:58.289 00:21:58.289 --- 10.0.0.1 ping statistics --- 00:21:58.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.289 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:58.289 11:07:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.289 11:07:54 -- nvmf/common.sh@411 -- # return 0 00:21:58.289 11:07:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:58.289 11:07:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.289 11:07:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:58.289 11:07:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:58.289 11:07:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.289 11:07:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:58.289 11:07:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:58.289 11:07:54 -- host/fio.sh@14 -- # [[ y != y ]] 00:21:58.289 11:07:54 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:21:58.289 11:07:54 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:58.289 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:21:58.289 11:07:54 -- host/fio.sh@22 -- # nvmfpid=425331 00:21:58.290 11:07:54 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.290 11:07:54 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.290 11:07:54 -- host/fio.sh@26 -- # waitforlisten 425331 00:21:58.290 11:07:54 -- common/autotest_common.sh@827 -- # '[' -z 425331 ']' 00:21:58.290 11:07:54 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.290 11:07:54 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:58.290 11:07:54 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.290 11:07:54 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:58.290 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:21:58.290 [2024-05-15 11:07:54.863781] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:21:58.290 [2024-05-15 11:07:54.863851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.290 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.290 [2024-05-15 11:07:54.936484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.550 [2024-05-15 11:07:55.013387] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.550 [2024-05-15 11:07:55.013428] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.550 [2024-05-15 11:07:55.013436] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.550 [2024-05-15 11:07:55.013443] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.550 [2024-05-15 11:07:55.013448] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:58.550 [2024-05-15 11:07:55.013584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.550 [2024-05-15 11:07:55.013671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.550 [2024-05-15 11:07:55.013816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.550 [2024-05-15 11:07:55.013817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.120 11:07:55 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:59.120 11:07:55 -- common/autotest_common.sh@860 -- # return 0 00:21:59.120 11:07:55 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.120 11:07:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.120 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:21:59.120 [2024-05-15 11:07:55.654976] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.120 11:07:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.120 11:07:55 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:21:59.120 11:07:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.120 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:21:59.120 11:07:55 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:59.120 11:07:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.120 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:21:59.120 Malloc1 00:21:59.120 11:07:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.120 11:07:55 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.120 11:07:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.120 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:21:59.120 11:07:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.120 11:07:55 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:59.120 11:07:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.120 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:21:59.120 11:07:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.120 11:07:55 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.120 11:07:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.120 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:21:59.120 [2024-05-15 11:07:55.750269] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:59.120 [2024-05-15 11:07:55.750486] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.120 11:07:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.120 11:07:55 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:59.120 11:07:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.120 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:21:59.120 11:07:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.120 11:07:55 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:59.120 11:07:55 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
--bs=4096 00:21:59.120 11:07:55 -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:59.120 11:07:55 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:59.120 11:07:55 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:59.120 11:07:55 -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:59.120 11:07:55 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:59.120 11:07:55 -- common/autotest_common.sh@1337 -- # shift 00:21:59.120 11:07:55 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:59.120 11:07:55 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:59.405 11:07:55 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:59.405 11:07:55 -- common/autotest_common.sh@1341 -- # grep libasan 00:21:59.405 11:07:55 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:59.405 11:07:55 -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:59.405 11:07:55 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:59.405 11:07:55 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:59.405 11:07:55 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:59.405 11:07:55 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:59.405 11:07:55 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:59.405 11:07:55 -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:59.405 11:07:55 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:59.405 11:07:55 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:59.405 11:07:55 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:59.670 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:59.670 fio-3.35 00:21:59.670 Starting 1 thread 00:21:59.670 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.209 00:22:02.209 test: (groupid=0, jobs=1): err= 0: pid=425763: Wed May 15 11:07:58 2024 00:22:02.209 read: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(86.8MiB/2004msec) 00:22:02.209 slat (usec): min=2, max=275, avg= 2.20, stdev= 2.60 00:22:02.209 clat (usec): min=3688, max=9780, avg=6387.52, stdev=1101.60 00:22:02.209 lat (usec): min=3724, max=9786, avg=6389.72, stdev=1101.62 00:22:02.209 clat percentiles (usec): 00:22:02.209 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 5014], 00:22:02.209 | 30.00th=[ 5407], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 6980], 00:22:02.209 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7767], 00:22:02.209 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[ 9241], 99.95th=[ 9503], 00:22:02.209 | 99.99th=[ 9503] 00:22:02.209 bw ( KiB/s): min=39352, max=57248, per=99.85%, avg=44286.00, stdev=8652.69, samples=4 00:22:02.209 iops : min= 9838, max=14312, avg=11071.50, stdev=2163.17, samples=4 00:22:02.209 write: IOPS=11.1k, 
BW=43.2MiB/s (45.3MB/s)(86.5MiB/2004msec); 0 zone resets 00:22:02.209 slat (usec): min=2, max=265, avg= 2.29, stdev= 1.98 00:22:02.209 clat (usec): min=2871, max=8232, avg=5135.26, stdev=874.90 00:22:02.209 lat (usec): min=2888, max=8239, avg=5137.55, stdev=874.97 00:22:02.209 clat percentiles (usec): 00:22:02.209 | 1.00th=[ 3490], 5.00th=[ 3720], 10.00th=[ 3851], 20.00th=[ 4047], 00:22:02.209 | 30.00th=[ 4359], 40.00th=[ 5211], 50.00th=[ 5407], 60.00th=[ 5604], 00:22:02.209 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 6063], 95.00th=[ 6194], 00:22:02.209 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7701], 99.95th=[ 7898], 00:22:02.209 | 99.99th=[ 7963] 00:22:02.209 bw ( KiB/s): min=39840, max=56936, per=99.97%, avg=44210.00, stdev=8485.49, samples=4 00:22:02.209 iops : min= 9960, max=14234, avg=11052.50, stdev=2121.37, samples=4 00:22:02.209 lat (msec) : 4=8.67%, 10=91.33% 00:22:02.209 cpu : usr=72.24%, sys=26.66%, ctx=32, majf=0, minf=4 00:22:02.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:02.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:02.209 issued rwts: total=22220,22155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:02.209 00:22:02.209 Run status group 0 (all jobs): 00:22:02.209 READ: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=86.8MiB (91.0MB), run=2004-2004msec 00:22:02.209 WRITE: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=86.5MiB (90.7MB), run=2004-2004msec 00:22:02.209 11:07:58 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:02.209 11:07:58 -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:02.209 11:07:58 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:02.209 11:07:58 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:02.209 11:07:58 -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:02.209 11:07:58 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:02.209 11:07:58 -- common/autotest_common.sh@1337 -- # shift 00:22:02.209 11:07:58 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:02.209 11:07:58 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:02.209 11:07:58 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:02.209 11:07:58 -- common/autotest_common.sh@1341 -- # grep libasan 00:22:02.209 11:07:58 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:02.209 11:07:58 -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:02.209 11:07:58 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:02.209 11:07:58 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:02.209 11:07:58 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:02.209 11:07:58 -- common/autotest_common.sh@1341 -- # grep 
libclang_rt.asan 00:22:02.209 11:07:58 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:02.209 11:07:58 -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:02.209 11:07:58 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:02.209 11:07:58 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:02.209 11:07:58 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:02.209 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:02.209 fio-3.35 00:22:02.209 Starting 1 thread 00:22:02.469 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.378 [2024-05-15 11:08:01.016223] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1060780 is same with the state(5) to be set 00:22:04.378 [2024-05-15 11:08:01.016305] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1060780 is same with the state(5) to be set 00:22:04.638 [2024-05-15 11:08:01.169458] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105ff10 is same with the state(5) to be set 00:22:04.638 00:22:04.638 test: (groupid=0, jobs=1): err= 0: pid=426464: Wed May 15 11:08:01 2024 00:22:04.638 read: IOPS=9395, BW=147MiB/s (154MB/s)(295MiB/2007msec) 00:22:04.638 slat (usec): min=3, max=107, avg= 3.68, stdev= 1.64 00:22:04.638 clat (usec): min=1026, max=15591, avg=8284.39, stdev=1947.95 00:22:04.638 lat (usec): min=1030, max=15608, avg=8288.07, stdev=1948.14 00:22:04.638 clat percentiles (usec): 00:22:04.638 | 1.00th=[ 4178], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6456], 00:22:04.638 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 8979], 00:22:04.638 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[11207], 00:22:04.638 | 99.00th=[12911], 99.50th=[13960], 99.90th=[14877], 99.95th=[15139], 00:22:04.638 | 99.99th=[15533] 00:22:04.638 bw ( KiB/s): min=66560, max=82299, per=49.52%, avg=74438.75, stdev=6436.59, samples=4 00:22:04.638 iops : min= 4160, max= 5143, avg=4652.25, stdev=402.01, samples=4 00:22:04.638 write: IOPS=5624, BW=87.9MiB/s (92.2MB/s)(152MiB/1734msec); 0 zone resets 00:22:04.638 slat (usec): min=40, max=447, avg=41.42, stdev= 9.41 00:22:04.638 clat (usec): min=2706, max=17131, avg=9393.57, stdev=1688.90 00:22:04.638 lat (usec): min=2746, max=17264, avg=9435.00, stdev=1691.73 00:22:04.638 clat percentiles (usec): 00:22:04.638 | 1.00th=[ 5800], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8029], 00:22:04.638 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:22:04.638 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11338], 95.00th=[12256], 00:22:04.638 | 99.00th=[14484], 99.50th=[15401], 99.90th=[16909], 99.95th=[16909], 00:22:04.638 | 99.99th=[17171] 00:22:04.638 bw ( KiB/s): min=68672, max=86035, per=86.04%, avg=77428.75, stdev=7089.72, samples=4 00:22:04.638 iops : min= 4292, max= 5377, avg=4839.25, stdev=443.03, samples=4 00:22:04.638 lat (msec) : 2=0.02%, 4=0.63%, 10=76.00%, 20=23.34% 00:22:04.638 cpu : usr=84.55%, sys=14.06%, ctx=18, majf=0, minf=14 00:22:04.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:04.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:22:04.638 issued rwts: total=18856,9753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:04.638 00:22:04.638 Run status group 0 (all jobs): 00:22:04.638 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2007-2007msec 00:22:04.638 WRITE: bw=87.9MiB/s (92.2MB/s), 87.9MiB/s-87.9MiB/s (92.2MB/s-92.2MB/s), io=152MiB (160MB), run=1734-1734msec 00:22:04.638 11:08:01 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.638 11:08:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.638 11:08:01 -- common/autotest_common.sh@10 -- # set +x 00:22:04.638 11:08:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.638 11:08:01 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:04.638 11:08:01 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:04.638 11:08:01 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:04.638 11:08:01 -- host/fio.sh@84 -- # nvmftestfini 00:22:04.638 11:08:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:04.638 11:08:01 -- nvmf/common.sh@117 -- # sync 00:22:04.638 11:08:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:04.638 11:08:01 -- nvmf/common.sh@120 -- # set +e 00:22:04.638 11:08:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.638 11:08:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:04.638 rmmod nvme_tcp 00:22:04.638 rmmod nvme_fabrics 00:22:04.638 rmmod nvme_keyring 00:22:04.638 11:08:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.638 11:08:01 -- nvmf/common.sh@124 -- # set -e 00:22:04.638 11:08:01 -- nvmf/common.sh@125 -- # return 0 00:22:04.899 11:08:01 -- nvmf/common.sh@478 -- # '[' -n 425331 ']' 00:22:04.899 11:08:01 -- nvmf/common.sh@479 -- # killprocess 425331 00:22:04.899 11:08:01 -- common/autotest_common.sh@946 -- # '[' -z 425331 ']' 00:22:04.899 11:08:01 -- common/autotest_common.sh@950 -- # kill -0 425331 00:22:04.899 11:08:01 -- common/autotest_common.sh@951 -- # uname 00:22:04.899 11:08:01 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:04.899 11:08:01 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 425331 00:22:04.899 11:08:01 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:04.899 11:08:01 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:04.899 11:08:01 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 425331' 00:22:04.899 killing process with pid 425331 00:22:04.899 11:08:01 -- common/autotest_common.sh@965 -- # kill 425331 00:22:04.899 [2024-05-15 11:08:01.349832] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:04.899 11:08:01 -- common/autotest_common.sh@970 -- # wait 425331 00:22:04.899 11:08:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:04.899 11:08:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:04.899 11:08:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:04.899 11:08:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.899 11:08:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.899 11:08:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.899 11:08:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.899 11:08:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
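Both fio jobs above drive the target through the SPDK NVMe fio plugin rather than the kernel initiator: the plugin is LD_PRELOADed into a stock fio binary, ioengine=spdk is set in the job file, and the NVMe-oF path is carried in the --filename string. A minimal, hedged equivalent of the traced invocation (fio is installed under /usr/src/fio in this CI image; job file path abbreviated relative to the SPDK tree):

  # run the example job against the TCP listener created earlier
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
    ./app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096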
00:22:07.444 11:08:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:07.444 00:22:07.444 real 0m15.873s 00:22:07.444 user 0m57.360s 00:22:07.444 sys 0m6.945s 00:22:07.444 11:08:03 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:07.444 11:08:03 -- common/autotest_common.sh@10 -- # set +x 00:22:07.444 ************************************ 00:22:07.444 END TEST nvmf_fio_host 00:22:07.444 ************************************ 00:22:07.444 11:08:03 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:07.444 11:08:03 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:07.444 11:08:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:07.444 11:08:03 -- common/autotest_common.sh@10 -- # set +x 00:22:07.444 ************************************ 00:22:07.444 START TEST nvmf_failover 00:22:07.444 ************************************ 00:22:07.444 11:08:03 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:07.444 * Looking for test storage... 00:22:07.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.444 11:08:03 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.444 11:08:03 -- nvmf/common.sh@7 -- # uname -s 00:22:07.444 11:08:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.444 11:08:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.444 11:08:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.444 11:08:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.444 11:08:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.444 11:08:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.444 11:08:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.444 11:08:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.444 11:08:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.444 11:08:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.444 11:08:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.444 11:08:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.444 11:08:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.444 11:08:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.444 11:08:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.444 11:08:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.444 11:08:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.444 11:08:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.444 11:08:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.444 11:08:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.444 11:08:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.444 11:08:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.444 11:08:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.444 11:08:03 -- paths/export.sh@5 -- # export PATH 00:22:07.444 11:08:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.444 11:08:03 -- nvmf/common.sh@47 -- # : 0 00:22:07.444 11:08:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.444 11:08:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.444 11:08:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.444 11:08:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.444 11:08:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.444 11:08:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.445 11:08:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.445 11:08:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.445 11:08:03 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.445 11:08:03 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.445 11:08:03 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:07.445 11:08:03 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:07.445 11:08:03 -- host/failover.sh@18 -- # nvmftestinit 00:22:07.445 11:08:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:07.445 11:08:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.445 11:08:03 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:22:07.445 11:08:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:07.445 11:08:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:07.445 11:08:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.445 11:08:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.445 11:08:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.445 11:08:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:07.445 11:08:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:07.445 11:08:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.445 11:08:03 -- common/autotest_common.sh@10 -- # set +x 00:22:14.025 11:08:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:14.025 11:08:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.025 11:08:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.025 11:08:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.025 11:08:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.025 11:08:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.025 11:08:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.025 11:08:10 -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.025 11:08:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.025 11:08:10 -- nvmf/common.sh@296 -- # e810=() 00:22:14.025 11:08:10 -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.025 11:08:10 -- nvmf/common.sh@297 -- # x722=() 00:22:14.025 11:08:10 -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.025 11:08:10 -- nvmf/common.sh@298 -- # mlx=() 00:22:14.025 11:08:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.025 11:08:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.025 11:08:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.025 11:08:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.025 11:08:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.025 11:08:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.025 11:08:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:14.025 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:14.025 11:08:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.025 11:08:10 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.025 11:08:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:14.025 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:14.025 11:08:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.025 11:08:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.025 11:08:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.025 11:08:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:14.025 11:08:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.025 11:08:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:14.025 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:14.025 11:08:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.025 11:08:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.025 11:08:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.025 11:08:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:14.025 11:08:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.025 11:08:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:14.025 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:14.025 11:08:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.025 11:08:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:14.025 11:08:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:14.025 11:08:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:14.025 11:08:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:14.025 11:08:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.025 11:08:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.025 11:08:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.025 11:08:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.025 11:08:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.025 11:08:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.025 11:08:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.025 11:08:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.025 11:08:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.025 11:08:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.025 11:08:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.025 11:08:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.026 11:08:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.026 11:08:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.026 11:08:10 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.026 11:08:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.026 11:08:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.285 11:08:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.285 11:08:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.285 11:08:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.735 ms 00:22:14.285 00:22:14.285 --- 10.0.0.2 ping statistics --- 00:22:14.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.285 rtt min/avg/max/mdev = 0.735/0.735/0.735/0.000 ms 00:22:14.285 11:08:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:22:14.285 00:22:14.285 --- 10.0.0.1 ping statistics --- 00:22:14.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.285 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:22:14.285 11:08:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.285 11:08:10 -- nvmf/common.sh@411 -- # return 0 00:22:14.285 11:08:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:14.285 11:08:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.285 11:08:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:14.285 11:08:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:14.285 11:08:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.285 11:08:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:14.285 11:08:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:14.285 11:08:10 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:14.285 11:08:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:14.285 11:08:10 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:14.285 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:22:14.285 11:08:10 -- nvmf/common.sh@470 -- # nvmfpid=431078 00:22:14.285 11:08:10 -- nvmf/common.sh@471 -- # waitforlisten 431078 00:22:14.285 11:08:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:14.285 11:08:10 -- common/autotest_common.sh@827 -- # '[' -z 431078 ']' 00:22:14.285 11:08:10 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.285 11:08:10 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.285 11:08:10 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.285 11:08:10 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.285 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:22:14.286 [2024-05-15 11:08:10.821570] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
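From here the failover test proper begins: the freshly started target exports a malloc bdev through nqn.2016-06.io.spdk:cnode1 on three TCP listeners (4420, 4421, 4422), bdevperf attaches two paths to that subsystem over its own RPC socket, and the script then removes and re-adds listeners while I/O is running so the initiator is forced to fail over between ports. A condensed sketch of the sequence traced below, built only from commands visible in this log (sleeps and the later 4421/4422 rotation omitted; not the literal failover.sh):

  # target side: one subsystem, three listeners
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # initiator side: bdevperf is configured and started over /var/tmp/bdevperf.sock
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  # drop the listener behind the active path to force the first failover
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420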
00:22:14.286 [2024-05-15 11:08:10.821642] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.286 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.286 [2024-05-15 11:08:10.907427] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:14.544 [2024-05-15 11:08:11.000491] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.544 [2024-05-15 11:08:11.000543] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.544 [2024-05-15 11:08:11.000558] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.544 [2024-05-15 11:08:11.000565] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.544 [2024-05-15 11:08:11.000572] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.544 [2024-05-15 11:08:11.000695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.544 [2024-05-15 11:08:11.001047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.544 [2024-05-15 11:08:11.001049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.114 11:08:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:15.114 11:08:11 -- common/autotest_common.sh@860 -- # return 0 00:22:15.114 11:08:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:15.114 11:08:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.114 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:22:15.114 11:08:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.114 11:08:11 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:15.374 [2024-05-15 11:08:11.775287] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.374 11:08:11 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:15.374 Malloc0 00:22:15.374 11:08:12 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.633 11:08:12 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.892 11:08:12 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.892 [2024-05-15 11:08:12.476472] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:15.892 [2024-05-15 11:08:12.476699] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.892 11:08:12 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:16.151 [2024-05-15 11:08:12.645120] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4421 *** 00:22:16.151 11:08:12 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:16.411 [2024-05-15 11:08:12.809662] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:16.411 11:08:12 -- host/failover.sh@31 -- # bdevperf_pid=431477 00:22:16.411 11:08:12 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:16.411 11:08:12 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.411 11:08:12 -- host/failover.sh@34 -- # waitforlisten 431477 /var/tmp/bdevperf.sock 00:22:16.411 11:08:12 -- common/autotest_common.sh@827 -- # '[' -z 431477 ']' 00:22:16.411 11:08:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.411 11:08:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:16.411 11:08:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.412 11:08:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:16.412 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:22:17.352 11:08:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:17.352 11:08:13 -- common/autotest_common.sh@860 -- # return 0 00:22:17.352 11:08:13 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:17.352 NVMe0n1 00:22:17.352 11:08:13 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:17.924 00:22:17.924 11:08:14 -- host/failover.sh@39 -- # run_test_pid=431812 00:22:17.924 11:08:14 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:17.924 11:08:14 -- host/failover.sh@41 -- # sleep 1 00:22:18.865 11:08:15 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.865 [2024-05-15 11:08:15.465666] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ad30 is same with the state(5) to be set 00:22:18.865 [2024-05-15 11:08:15.465708] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ad30 is same with the state(5) to be set 00:22:18.865 [2024-05-15 11:08:15.465714] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ad30 is same with the state(5) to be set 00:22:18.865 [2024-05-15 11:08:15.465719] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ad30 is same with the state(5) to be set 00:22:18.865 [2024-05-15 11:08:15.465724] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ad30 is same with the 
state(5) to be set 00:22:18.865 [2024-05-15 11:08:15.465729] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ad30 is same with the state(5) to be set 00:22:18.865 [2024-05-15 11:08:15.465733] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223ad30 is same with the state(5) to be set 00:22:18.865 11:08:15 -- host/failover.sh@45 -- # sleep 3 00:22:22.166 11:08:18 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.166 00:22:22.166 11:08:18 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:22.427 [2024-05-15 11:08:18.914340] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.427 [2024-05-15 11:08:18.914376] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.427 [2024-05-15 11:08:18.914382] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914387] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914392] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914396] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914401] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914410] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914415] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914420] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914424] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914429] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914433] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914438] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 [2024-05-15 11:08:18.914442] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c110 is same with the state(5) to be set 00:22:22.428 11:08:18 -- host/failover.sh@50 -- # sleep 3 00:22:25.727 11:08:21 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:22:25.727 [2024-05-15 11:08:22.086851] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.727 11:08:22 -- host/failover.sh@55 -- # sleep 1 00:22:26.678 11:08:23 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:26.678 [2024-05-15 11:08:23.259301] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259337] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259342] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259348] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259352] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259357] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259362] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259366] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259371] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259375] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259380] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259384] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259389] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259393] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259397] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259406] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259411] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259415] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 [2024-05-15 11:08:23.259419] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c940 is same with the state(5) to be set 00:22:26.678 
[duplicate log lines omitted: the same tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: "The recv state of tqpair=0x223c940 is same with the state(5) to be set" message repeats once per timestamp from 2024-05-15 11:08:23.259424 through 11:08:23.259934]
00:22:26.679 11:08:23 -- host/failover.sh@59 -- # wait 431812 00:22:33.264 0 00:22:33.264 11:08:29 -- host/failover.sh@61 -- # killprocess 431477 00:22:33.264 11:08:29 --
common/autotest_common.sh@946 -- # '[' -z 431477 ']' 00:22:33.264 11:08:29 -- common/autotest_common.sh@950 -- # kill -0 431477 00:22:33.264 11:08:29 -- common/autotest_common.sh@951 -- # uname 00:22:33.264 11:08:29 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:33.264 11:08:29 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 431477 00:22:33.264 11:08:29 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:33.264 11:08:29 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:33.264 11:08:29 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 431477' 00:22:33.264 killing process with pid 431477 00:22:33.264 11:08:29 -- common/autotest_common.sh@965 -- # kill 431477 00:22:33.264 11:08:29 -- common/autotest_common.sh@970 -- # wait 431477 00:22:33.264 11:08:29 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:33.264 [2024-05-15 11:08:12.874421] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:22:33.264 [2024-05-15 11:08:12.874476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431477 ] 00:22:33.264 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.264 [2024-05-15 11:08:12.933234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.264 [2024-05-15 11:08:12.997118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.264 Running I/O for 15 seconds... 00:22:33.264 [2024-05-15 11:08:15.466698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.264 [2024-05-15 11:08:15.466733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.264 [2024-05-15 11:08:15.466750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.264 [2024-05-15 11:08:15.466759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.264 [2024-05-15 11:08:15.466771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.264 [2024-05-15 11:08:15.466779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.264 [2024-05-15 11:08:15.466789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.264 [2024-05-15 11:08:15.466797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.264 [2024-05-15 11:08:15.466807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.264 [2024-05-15 11:08:15.466814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.264 [2024-05-15 11:08:15.466823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96240 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.264 [2024-05-15 11:08:15.466831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.264 [2024-05-15 11:08:15.466842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.466860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.466878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.466896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.466914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.466937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.466954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.466970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.466986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.466993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:33.265 [2024-05-15 11:08:15.467009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 
11:08:15.467171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.265 [2024-05-15 11:08:15.467400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.265 [2024-05-15 11:08:15.467416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.265 [2024-05-15 11:08:15.467432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.265 [2024-05-15 11:08:15.467448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.265 [2024-05-15 11:08:15.467463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.265 [2024-05-15 11:08:15.467479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.265 [2024-05-15 11:08:15.467495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.265 [2024-05-15 11:08:15.467503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.265 [2024-05-15 11:08:15.467510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.266 [2024-05-15 11:08:15.467788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 
[2024-05-15 11:08:15.467829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.467990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.467997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.266 [2024-05-15 11:08:15.468151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.266 [2024-05-15 11:08:15.468161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96968 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 
11:08:15.468480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.267 [2024-05-15 11:08:15.468808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.267 [2024-05-15 11:08:15.468831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.267 [2024-05-15 11:08:15.468838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.267 [2024-05-15 11:08:15.468845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0 00:22:33.268 [2024-05-15 11:08:15.468852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:15.468888] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a72660 was disconnected and freed. reset controller. 00:22:33.268 [2024-05-15 11:08:15.468903] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:33.268 [2024-05-15 11:08:15.468921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.268 [2024-05-15 11:08:15.468928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:15.468936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.268 [2024-05-15 11:08:15.468943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:15.468951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.268 [2024-05-15 11:08:15.468959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:15.468966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.268 [2024-05-15 11:08:15.468973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:15.468980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:33.268 [2024-05-15 11:08:15.472541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.268 [2024-05-15 11:08:15.472567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a533c0 (9): Bad file descriptor 00:22:33.268 [2024-05-15 11:08:15.676146] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:33.268 [2024-05-15 11:08:18.915056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.268 [2024-05-15 11:08:18.915093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.268 [2024-05-15 11:08:18.915111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.268 [2024-05-15 11:08:18.915131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.268 [2024-05-15 11:08:18.915147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a533c0 is same with the state(5) to be set 00:22:33.268 [2024-05-15 11:08:18.915200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.268 [2024-05-15 11:08:18.915210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.268 [2024-05-15 11:08:18.915474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.268 [2024-05-15 11:08:18.915490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.268 [2024-05-15 11:08:18.915506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.268 [2024-05-15 11:08:18.915522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.268 [2024-05-15 11:08:18.915538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.268 [2024-05-15 11:08:18.915559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.268 [2024-05-15 11:08:18.915575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.268 [2024-05-15 11:08:18.915651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.268 [2024-05-15 11:08:18.915658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 
11:08:18.915797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.915979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.915988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.269 [2024-05-15 11:08:18.916157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.269 [2024-05-15 11:08:18.916173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.269 [2024-05-15 11:08:18.916189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.269 [2024-05-15 11:08:18.916207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.269 [2024-05-15 11:08:18.916216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98432 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:33.270 [2024-05-15 11:08:18.916304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 
[2024-05-15 11:08:18.916471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.270 [2024-05-15 11:08:18.916716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.270 [2024-05-15 11:08:18.916732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.270 [2024-05-15 11:08:18.916749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.270 [2024-05-15 11:08:18.916765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.270 [2024-05-15 11:08:18.916781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.270 [2024-05-15 11:08:18.916797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.270 [2024-05-15 11:08:18.916813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.270 [2024-05-15 11:08:18.916878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.270 [2024-05-15 11:08:18.916887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.916894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.916902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.916909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.916920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.916927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.916936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.916943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.916952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.916959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.916968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.916976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.916985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.916992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.917008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.917025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:18.917042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 
[2024-05-15 11:08:18.917131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.271 [2024-05-15 11:08:18.917282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:18.917301] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:33.271 [2024-05-15 11:08:18.917308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:33.271 [2024-05-15 11:08:18.917314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98616 len:8 PRP1 0x0 PRP2 0x0
00:22:33.271 [2024-05-15 11:08:18.917321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.271 [2024-05-15 11:08:18.917355] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a74690 was disconnected and freed. reset controller.
00:22:33.271 [2024-05-15 11:08:18.917365] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:33.271 [2024-05-15 11:08:18.917373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:33.271 [2024-05-15 11:08:18.920931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:33.271 [2024-05-15 11:08:18.920956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a533c0 (9): Bad file descriptor
00:22:33.271 [2024-05-15 11:08:18.969970] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:33.271 [2024-05-15 11:08:23.261463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.271 [2024-05-15 11:08:23.261502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.271 [2024-05-15 11:08:23.261519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.271 [2024-05-15 11:08:23.261527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.271 [2024-05-15 11:08:23.261537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.271 [2024-05-15 11:08:23.261550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.271 [2024-05-15 11:08:23.261561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.271 [2024-05-15 11:08:23.261568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.271 [2024-05-15 11:08:23.261577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.271 [2024-05-15 11:08:23.261584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.271 [2024-05-15 11:08:23.261594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.271 [2024-05-15 11:08:23.261605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.271 [2024-05-15
11:08:23.261615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:23.261622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:23.261630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:23.261637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:23.261647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:23.261654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:23.261663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:23.261670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:23.261679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:23.261687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.271 [2024-05-15 11:08:23.261697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.271 [2024-05-15 11:08:23.261704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261942] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.261990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.261997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 
nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113976 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.272 [2024-05-15 11:08:23.262370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.272 [2024-05-15 11:08:23.262377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.273 [2024-05-15 11:08:23.262394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.273 [2024-05-15 11:08:23.262409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.273 [2024-05-15 11:08:23.262427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:33.273 [2024-05-15 11:08:23.262443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 
11:08:23.262606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.273 [2024-05-15 11:08:23.262944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.273 [2024-05-15 11:08:23.262951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.262960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.274 [2024-05-15 11:08:23.262967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.262975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.274 [2024-05-15 11:08:23.262983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.262992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.274 [2024-05-15 11:08:23.262999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.274 [2024-05-15 11:08:23.263015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.274 [2024-05-15 11:08:23.263031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.274 [2024-05-15 11:08:23.263048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.274 [2024-05-15 11:08:23.263064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:33.274 [2024-05-15 11:08:23.263269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 
11:08:23.263429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.274 [2024-05-15 11:08:23.263533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.274 [2024-05-15 11:08:23.263567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114608 len:8 PRP1 0x0 PRP2 0x0 00:22:33.274 [2024-05-15 11:08:23.263575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.274 [2024-05-15 11:08:23.263591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.274 [2024-05-15 11:08:23.263597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114616 len:8 PRP1 0x0 PRP2 0x0 00:22:33.274 [2024-05-15 11:08:23.263604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.274 [2024-05-15 11:08:23.263611] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.274 [2024-05-15 11:08:23.263616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.274 [2024-05-15 11:08:23.263623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114624 len:8 PRP1 0x0 PRP2 0x0 00:22:33.274 [2024-05-15 11:08:23.263630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.275 [2024-05-15 11:08:23.263638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.275 [2024-05-15 11:08:23.263643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.275 [2024-05-15 11:08:23.263649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114120 len:8 PRP1 0x0 PRP2 0x0 00:22:33.275 [2024-05-15 11:08:23.263656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.275 [2024-05-15 11:08:23.263692] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a74690 was disconnected and freed. reset controller. 00:22:33.275 [2024-05-15 11:08:23.263702] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:33.275 [2024-05-15 11:08:23.263725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.275 [2024-05-15 11:08:23.263733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.275 [2024-05-15 11:08:23.263741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.275 [2024-05-15 11:08:23.263748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.275 [2024-05-15 11:08:23.263756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.275 [2024-05-15 11:08:23.263763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.275 [2024-05-15 11:08:23.263771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.275 [2024-05-15 11:08:23.263778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.275 [2024-05-15 11:08:23.263785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:33.275 [2024-05-15 11:08:23.263820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a533c0 (9): Bad file descriptor 00:22:33.275 [2024-05-15 11:08:23.267372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.275 [2024-05-15 11:08:23.471450] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
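Note: the exchange above — "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" followed by the queued-I/O aborts and "Resetting controller successful" — is bdev_nvme tearing down the active queue pair and reconnecting through an alternate listener. A minimal sketch of the RPC flow this test drives, built from the same rpc.py and bdevperf invocations that appear later in this log (Jenkins paths shortened to the SPDK tree; the running target and the bdevperf socket are assumptions, not part of the captured output):
# Sketch only: assumes the target already exports nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# bdevperf waits on its RPC socket (the test script backgrounds it and redirects output to try.txt).
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
# Attach through the first path; bdev_nvme fails over to 4421/4422 when the active
# path goes away, which is what produces the reset/failover notices above.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
grep -c 'Resetting controller successful' try.txt   # the first pass checks for 3 successful resets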
00:22:33.275 00:22:33.275 Latency(us) 00:22:33.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.275 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:33.275 Verification LBA range: start 0x0 length 0x4000 00:22:33.275 NVMe0n1 : 15.01 11503.10 44.93 1105.42 0.00 10123.75 512.00 13161.81 00:22:33.275 =================================================================================================================== 00:22:33.275 Total : 11503.10 44.93 1105.42 0.00 10123.75 512.00 13161.81 00:22:33.275 Received shutdown signal, test time was about 15.000000 seconds 00:22:33.275 00:22:33.275 Latency(us) 00:22:33.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.275 =================================================================================================================== 00:22:33.275 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.275 11:08:29 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:33.275 11:08:29 -- host/failover.sh@65 -- # count=3 00:22:33.275 11:08:29 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:33.275 11:08:29 -- host/failover.sh@73 -- # bdevperf_pid=434733 00:22:33.275 11:08:29 -- host/failover.sh@75 -- # waitforlisten 434733 /var/tmp/bdevperf.sock 00:22:33.275 11:08:29 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:33.275 11:08:29 -- common/autotest_common.sh@827 -- # '[' -z 434733 ']' 00:22:33.275 11:08:29 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.275 11:08:29 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:33.275 11:08:29 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:33.275 11:08:29 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:33.275 11:08:29 -- common/autotest_common.sh@10 -- # set +x 00:22:33.847 11:08:30 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:33.847 11:08:30 -- common/autotest_common.sh@860 -- # return 0 00:22:33.847 11:08:30 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:34.108 [2024-05-15 11:08:30.619052] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:34.108 11:08:30 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:34.369 [2024-05-15 11:08:30.779437] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:34.369 11:08:30 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:34.629 NVMe0n1 00:22:34.629 11:08:31 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:35.207 00:22:35.207 11:08:31 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:35.468 00:22:35.468 11:08:31 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:35.468 11:08:31 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:35.468 11:08:32 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:35.728 11:08:32 -- host/failover.sh@87 -- # sleep 3 00:22:39.025 11:08:35 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:39.025 11:08:35 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:39.025 11:08:35 -- host/failover.sh@90 -- # run_test_pid=435845 00:22:39.025 11:08:35 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.025 11:08:35 -- host/failover.sh@92 -- # wait 435845 00:22:39.966 0 00:22:39.966 11:08:36 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:39.966 [2024-05-15 11:08:29.710822] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:22:39.966 [2024-05-15 11:08:29.710885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434733 ] 00:22:39.966 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.967 [2024-05-15 11:08:29.770459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.967 [2024-05-15 11:08:29.834215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.967 [2024-05-15 11:08:32.206930] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:39.967 [2024-05-15 11:08:32.206973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.967 [2024-05-15 11:08:32.206985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.967 [2024-05-15 11:08:32.206993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.967 [2024-05-15 11:08:32.207001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.967 [2024-05-15 11:08:32.207009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.967 [2024-05-15 11:08:32.207016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.967 [2024-05-15 11:08:32.207023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.967 [2024-05-15 11:08:32.207030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.967 [2024-05-15 11:08:32.207037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.967 [2024-05-15 11:08:32.207058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.967 [2024-05-15 11:08:32.207073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a73c0 (9): Bad file descriptor 00:22:39.967 [2024-05-15 11:08:32.298748] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:39.967 Running I/O for 1 seconds... 
00:22:39.967 00:22:39.967 Latency(us) 00:22:39.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.967 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:39.967 Verification LBA range: start 0x0 length 0x4000 00:22:39.967 NVMe0n1 : 1.01 11668.98 45.58 0.00 0.00 10916.37 2416.64 12124.16 00:22:39.967 =================================================================================================================== 00:22:39.967 Total : 11668.98 45.58 0.00 0.00 10916.37 2416.64 12124.16 00:22:39.967 11:08:36 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:39.967 11:08:36 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:40.226 11:08:36 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.226 11:08:36 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:40.226 11:08:36 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:40.487 11:08:37 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.746 11:08:37 -- host/failover.sh@101 -- # sleep 3 00:22:44.043 11:08:40 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:44.043 11:08:40 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:44.043 11:08:40 -- host/failover.sh@108 -- # killprocess 434733 00:22:44.043 11:08:40 -- common/autotest_common.sh@946 -- # '[' -z 434733 ']' 00:22:44.043 11:08:40 -- common/autotest_common.sh@950 -- # kill -0 434733 00:22:44.043 11:08:40 -- common/autotest_common.sh@951 -- # uname 00:22:44.043 11:08:40 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:44.043 11:08:40 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 434733 00:22:44.043 11:08:40 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:44.043 11:08:40 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:44.043 11:08:40 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 434733' 00:22:44.043 killing process with pid 434733 00:22:44.043 11:08:40 -- common/autotest_common.sh@965 -- # kill 434733 00:22:44.043 11:08:40 -- common/autotest_common.sh@970 -- # wait 434733 00:22:44.043 11:08:40 -- host/failover.sh@110 -- # sync 00:22:44.043 11:08:40 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.304 11:08:40 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:44.304 11:08:40 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:44.304 11:08:40 -- host/failover.sh@116 -- # nvmftestfini 00:22:44.304 11:08:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:44.304 11:08:40 -- nvmf/common.sh@117 -- # sync 00:22:44.304 11:08:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.304 11:08:40 -- nvmf/common.sh@120 -- # set +e 00:22:44.304 11:08:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.304 11:08:40 -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:22:44.304 rmmod nvme_tcp 00:22:44.304 rmmod nvme_fabrics 00:22:44.304 rmmod nvme_keyring 00:22:44.304 11:08:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.304 11:08:40 -- nvmf/common.sh@124 -- # set -e 00:22:44.304 11:08:40 -- nvmf/common.sh@125 -- # return 0 00:22:44.304 11:08:40 -- nvmf/common.sh@478 -- # '[' -n 431078 ']' 00:22:44.304 11:08:40 -- nvmf/common.sh@479 -- # killprocess 431078 00:22:44.304 11:08:40 -- common/autotest_common.sh@946 -- # '[' -z 431078 ']' 00:22:44.304 11:08:40 -- common/autotest_common.sh@950 -- # kill -0 431078 00:22:44.304 11:08:40 -- common/autotest_common.sh@951 -- # uname 00:22:44.304 11:08:40 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:44.304 11:08:40 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 431078 00:22:44.304 11:08:40 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:44.304 11:08:40 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:44.304 11:08:40 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 431078' 00:22:44.304 killing process with pid 431078 00:22:44.304 11:08:40 -- common/autotest_common.sh@965 -- # kill 431078 00:22:44.304 [2024-05-15 11:08:40.874562] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:44.304 11:08:40 -- common/autotest_common.sh@970 -- # wait 431078 00:22:44.565 11:08:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:44.565 11:08:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:44.565 11:08:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:44.565 11:08:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.565 11:08:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.565 11:08:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.565 11:08:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.565 11:08:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.479 11:08:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:46.479 00:22:46.479 real 0m39.417s 00:22:46.479 user 2m2.640s 00:22:46.479 sys 0m7.798s 00:22:46.480 11:08:43 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:46.480 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:22:46.480 ************************************ 00:22:46.480 END TEST nvmf_failover 00:22:46.480 ************************************ 00:22:46.480 11:08:43 -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:46.480 11:08:43 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:46.480 11:08:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:46.480 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:22:46.741 ************************************ 00:22:46.741 START TEST nvmf_host_discovery 00:22:46.741 ************************************ 00:22:46.741 11:08:43 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:46.741 * Looking for test storage... 
00:22:46.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.741 11:08:43 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.741 11:08:43 -- nvmf/common.sh@7 -- # uname -s 00:22:46.741 11:08:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.741 11:08:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.741 11:08:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.741 11:08:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.741 11:08:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.741 11:08:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.741 11:08:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.741 11:08:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.741 11:08:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.741 11:08:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.741 11:08:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.741 11:08:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.741 11:08:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.741 11:08:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.741 11:08:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.741 11:08:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.741 11:08:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.741 11:08:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.741 11:08:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.741 11:08:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.741 11:08:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.741 11:08:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.741 11:08:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.741 11:08:43 -- paths/export.sh@5 -- # export PATH 00:22:46.741 11:08:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.741 11:08:43 -- nvmf/common.sh@47 -- # : 0 00:22:46.741 11:08:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.741 11:08:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.741 11:08:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.741 11:08:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.741 11:08:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.741 11:08:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.741 11:08:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.741 11:08:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.741 11:08:43 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:46.741 11:08:43 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:46.741 11:08:43 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:46.741 11:08:43 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:46.741 11:08:43 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:46.741 11:08:43 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:46.741 11:08:43 -- host/discovery.sh@25 -- # nvmftestinit 00:22:46.741 11:08:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:46.741 11:08:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.741 11:08:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:46.741 11:08:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:46.741 11:08:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:46.741 11:08:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.741 11:08:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.741 11:08:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.741 11:08:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:46.741 11:08:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:46.741 11:08:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:46.741 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:22:54.888 11:08:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:54.888 11:08:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.888 11:08:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.888 11:08:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.888 11:08:50 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.888 11:08:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.888 11:08:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.888 11:08:50 -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.888 11:08:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.888 11:08:50 -- nvmf/common.sh@296 -- # e810=() 00:22:54.888 11:08:50 -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.888 11:08:50 -- nvmf/common.sh@297 -- # x722=() 00:22:54.888 11:08:50 -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.888 11:08:50 -- nvmf/common.sh@298 -- # mlx=() 00:22:54.888 11:08:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.888 11:08:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.888 11:08:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.888 11:08:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.888 11:08:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.888 11:08:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.888 11:08:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:54.888 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:54.888 11:08:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.888 11:08:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:54.888 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:54.888 11:08:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.888 11:08:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.888 
11:08:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.888 11:08:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:54.888 11:08:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.888 11:08:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:54.888 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:54.888 11:08:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.888 11:08:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.888 11:08:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.888 11:08:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:54.888 11:08:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.888 11:08:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:54.888 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:54.888 11:08:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.888 11:08:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:54.888 11:08:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:54.888 11:08:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:54.888 11:08:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.888 11:08:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.888 11:08:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.888 11:08:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.888 11:08:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.888 11:08:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.888 11:08:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.888 11:08:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.888 11:08:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.888 11:08:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.888 11:08:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.888 11:08:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.888 11:08:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.888 11:08:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.888 11:08:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.888 11:08:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.888 11:08:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.888 11:08:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.888 11:08:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.888 11:08:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:54.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.756 ms 00:22:54.888 00:22:54.888 --- 10.0.0.2 ping statistics --- 00:22:54.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.888 rtt min/avg/max/mdev = 0.756/0.756/0.756/0.000 ms 00:22:54.888 11:08:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:22:54.888 00:22:54.888 --- 10.0.0.1 ping statistics --- 00:22:54.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.888 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:54.888 11:08:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.888 11:08:50 -- nvmf/common.sh@411 -- # return 0 00:22:54.888 11:08:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:54.888 11:08:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.888 11:08:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:54.888 11:08:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.888 11:08:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:54.888 11:08:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:54.888 11:08:50 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:54.888 11:08:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:54.888 11:08:50 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:54.888 11:08:50 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 11:08:50 -- nvmf/common.sh@470 -- # nvmfpid=440986 00:22:54.889 11:08:50 -- nvmf/common.sh@471 -- # waitforlisten 440986 00:22:54.889 11:08:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:54.889 11:08:50 -- common/autotest_common.sh@827 -- # '[' -z 440986 ']' 00:22:54.889 11:08:50 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.889 11:08:50 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:54.889 11:08:50 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.889 11:08:50 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.889 11:08:50 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 [2024-05-15 11:08:50.559342] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:22:54.889 [2024-05-15 11:08:50.559410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.889 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.889 [2024-05-15 11:08:50.647028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.889 [2024-05-15 11:08:50.739311] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.889 [2024-05-15 11:08:50.739372] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.889 [2024-05-15 11:08:50.739381] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.889 [2024-05-15 11:08:50.739387] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.889 [2024-05-15 11:08:50.739394] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
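Note: the nvmf_tcp_init trace above amounts to splitting the two detected e810 ports into separate network stacks: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), so host and target traffic crosses a real TCP path. Condensed sketch of the same commands, taken from the trace (assumes the two ports are cabled to each other; the nvmf_tgt path is shortened to the SPDK tree):
# Condensed from the nvmf_tcp_init trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target runs inside the namespace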
00:22:54.889 [2024-05-15 11:08:50.739419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.889 11:08:51 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.889 11:08:51 -- common/autotest_common.sh@860 -- # return 0 00:22:54.889 11:08:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:54.889 11:08:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.889 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 11:08:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.889 11:08:51 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.889 11:08:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.889 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 [2024-05-15 11:08:51.396944] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.889 11:08:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.889 11:08:51 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:54.889 11:08:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.889 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 [2024-05-15 11:08:51.408923] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:54.889 [2024-05-15 11:08:51.409256] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:54.889 11:08:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.889 11:08:51 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:54.889 11:08:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.889 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 null0 00:22:54.889 11:08:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.889 11:08:51 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:54.889 11:08:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.889 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 null1 00:22:54.889 11:08:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.889 11:08:51 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:54.889 11:08:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.889 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 11:08:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.889 11:08:51 -- host/discovery.sh@45 -- # hostpid=441204 00:22:54.889 11:08:51 -- host/discovery.sh@46 -- # waitforlisten 441204 /tmp/host.sock 00:22:54.889 11:08:51 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:54.889 11:08:51 -- common/autotest_common.sh@827 -- # '[' -z 441204 ']' 00:22:54.889 11:08:51 -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:22:54.889 11:08:51 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:54.889 11:08:51 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:54.889 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
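The discovery test then drives two SPDK apps and steers JSON-RPC to each one by socket path. A condensed sketch of that layout (binary path shortened; the backgrounding, pid capture, and waitforlisten polling visible in the trace are glossed over), assuming rpc_cmd dispatches to whatever socket -s names and to /var/tmp/spdk.sock by default:

# Target app: runs inside the namespace, answers RPC on the default /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# Host app: the discovery client, answers RPC on /tmp/host.sock
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!                          # assumption: this is how hostpid (441204 in this run) gets set

# Target side (no -s, default socket): transport, discovery listener, two null bdevs
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine

# Host side (-s /tmp/host.sock): attach the discovery service to the target's port 8009
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

Everything the test asserts afterwards is read back through the host socket: controller names, bdev names, subsystem paths, and discovery notifications.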
00:22:54.889 11:08:51 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.889 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 [2024-05-15 11:08:51.508498] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:22:54.889 [2024-05-15 11:08:51.508597] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441204 ] 00:22:54.889 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.150 [2024-05-15 11:08:51.573723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.150 [2024-05-15 11:08:51.647407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.722 11:08:52 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:55.722 11:08:52 -- common/autotest_common.sh@860 -- # return 0 00:22:55.722 11:08:52 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.722 11:08:52 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:55.722 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.722 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.722 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.722 11:08:52 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:55.722 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.722 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.722 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.722 11:08:52 -- host/discovery.sh@72 -- # notify_id=0 00:22:55.722 11:08:52 -- host/discovery.sh@83 -- # get_subsystem_names 00:22:55.722 11:08:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.722 11:08:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:55.722 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.722 11:08:52 -- host/discovery.sh@59 -- # sort 00:22:55.722 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.722 11:08:52 -- host/discovery.sh@59 -- # xargs 00:22:55.722 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.722 11:08:52 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:55.722 11:08:52 -- host/discovery.sh@84 -- # get_bdev_list 00:22:55.722 11:08:52 -- host/discovery.sh@55 -- # xargs 00:22:55.722 11:08:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.722 11:08:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.722 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.722 11:08:52 -- host/discovery.sh@55 -- # sort 00:22:55.722 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.990 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.990 11:08:52 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:55.990 11:08:52 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:55.990 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.990 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.990 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.990 11:08:52 -- host/discovery.sh@87 -- # get_subsystem_names 00:22:55.990 11:08:52 -- host/discovery.sh@59 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.990 11:08:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:55.990 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.990 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.990 11:08:52 -- host/discovery.sh@59 -- # sort 00:22:55.990 11:08:52 -- host/discovery.sh@59 -- # xargs 00:22:55.990 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.990 11:08:52 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:55.990 11:08:52 -- host/discovery.sh@88 -- # get_bdev_list 00:22:55.990 11:08:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.990 11:08:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.990 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.991 11:08:52 -- host/discovery.sh@55 -- # sort 00:22:55.991 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.991 11:08:52 -- host/discovery.sh@55 -- # xargs 00:22:55.991 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.991 11:08:52 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:55.991 11:08:52 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:55.991 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.991 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.991 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.991 11:08:52 -- host/discovery.sh@91 -- # get_subsystem_names 00:22:55.991 11:08:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.991 11:08:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:55.991 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.991 11:08:52 -- host/discovery.sh@59 -- # sort 00:22:55.991 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.991 11:08:52 -- host/discovery.sh@59 -- # xargs 00:22:55.991 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.991 11:08:52 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:55.991 11:08:52 -- host/discovery.sh@92 -- # get_bdev_list 00:22:55.991 11:08:52 -- host/discovery.sh@55 -- # sort 00:22:55.991 11:08:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.991 11:08:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:55.991 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.991 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.991 11:08:52 -- host/discovery.sh@55 -- # xargs 00:22:55.991 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.253 11:08:52 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:56.253 11:08:52 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:56.253 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.253 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:56.253 [2024-05-15 11:08:52.652329] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.253 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.253 11:08:52 -- host/discovery.sh@97 -- # get_subsystem_names 00:22:56.253 11:08:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.253 11:08:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.253 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.253 11:08:52 -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.253 11:08:52 -- host/discovery.sh@59 -- # sort 00:22:56.253 11:08:52 -- host/discovery.sh@59 -- # xargs 00:22:56.253 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.253 11:08:52 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:56.253 11:08:52 -- host/discovery.sh@98 -- # get_bdev_list 00:22:56.253 11:08:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.253 11:08:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.253 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.253 11:08:52 -- host/discovery.sh@55 -- # sort 00:22:56.254 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:56.254 11:08:52 -- host/discovery.sh@55 -- # xargs 00:22:56.254 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.254 11:08:52 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:56.254 11:08:52 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:56.254 11:08:52 -- host/discovery.sh@79 -- # expected_count=0 00:22:56.254 11:08:52 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.254 11:08:52 -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.254 11:08:52 -- common/autotest_common.sh@911 -- # local max=10 00:22:56.254 11:08:52 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:56.254 11:08:52 -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.254 11:08:52 -- common/autotest_common.sh@913 -- # get_notification_count 00:22:56.254 11:08:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:56.254 11:08:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:56.254 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.254 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:56.254 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.254 11:08:52 -- host/discovery.sh@74 -- # notification_count=0 00:22:56.254 11:08:52 -- host/discovery.sh@75 -- # notify_id=0 00:22:56.254 11:08:52 -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:56.254 11:08:52 -- common/autotest_common.sh@914 -- # return 0 00:22:56.254 11:08:52 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:56.254 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.254 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:56.254 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.254 11:08:52 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.254 11:08:52 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.254 11:08:52 -- common/autotest_common.sh@911 -- # local max=10 00:22:56.254 11:08:52 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:56.254 11:08:52 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:56.254 11:08:52 -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:56.254 11:08:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.254 11:08:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.254 11:08:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.254 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:22:56.254 11:08:52 -- host/discovery.sh@59 -- # sort 00:22:56.254 11:08:52 -- host/discovery.sh@59 -- # xargs 00:22:56.254 11:08:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.254 11:08:52 -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:22:56.254 11:08:52 -- common/autotest_common.sh@916 -- # sleep 1 00:22:56.825 [2024-05-15 11:08:53.355518] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:56.825 [2024-05-15 11:08:53.355542] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:56.825 [2024-05-15 11:08:53.355558] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:57.086 [2024-05-15 11:08:53.484949] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:57.086 [2024-05-15 11:08:53.545250] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:57.086 [2024-05-15 11:08:53.545279] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:57.346 11:08:53 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.346 11:08:53 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:57.346 11:08:53 -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:57.346 11:08:53 -- host/discovery.sh@59 -- # sort 00:22:57.346 11:08:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.346 11:08:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.346 11:08:53 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.346 11:08:53 -- common/autotest_common.sh@10 -- # set +x 00:22:57.346 11:08:53 -- host/discovery.sh@59 -- # xargs 00:22:57.346 11:08:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.346 11:08:53 -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.346 11:08:53 -- common/autotest_common.sh@914 -- # return 0 00:22:57.346 11:08:53 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:57.346 11:08:53 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:57.346 11:08:53 -- common/autotest_common.sh@911 -- # local max=10 00:22:57.346 11:08:53 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.346 11:08:53 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:57.346 11:08:53 -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:57.346 11:08:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.346 11:08:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.346 11:08:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.346 11:08:53 -- host/discovery.sh@55 -- # sort 00:22:57.346 11:08:53 -- common/autotest_common.sh@10 -- # set +x 00:22:57.346 11:08:53 -- host/discovery.sh@55 -- # xargs 00:22:57.346 11:08:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.346 11:08:53 -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:57.346 11:08:53 -- common/autotest_common.sh@914 -- # return 0 00:22:57.346 11:08:53 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:57.346 11:08:53 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:57.346 11:08:53 -- common/autotest_common.sh@911 -- # local max=10 00:22:57.346 11:08:53 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.346 11:08:53 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:57.346 11:08:53 -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:57.346 11:08:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:57.346 11:08:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:57.346 11:08:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.346 11:08:53 -- host/discovery.sh@63 -- # sort -n 00:22:57.346 11:08:53 -- common/autotest_common.sh@10 -- # set +x 00:22:57.347 11:08:53 -- host/discovery.sh@63 -- # xargs 00:22:57.607 11:08:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.607 11:08:54 -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:22:57.607 11:08:54 -- common/autotest_common.sh@914 -- # return 0 00:22:57.607 11:08:54 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:57.607 11:08:54 -- host/discovery.sh@79 -- # expected_count=1 00:22:57.607 11:08:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:57.607 11:08:54 -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:57.607 11:08:54 -- common/autotest_common.sh@911 -- # local max=10 00:22:57.607 11:08:54 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.607 11:08:54 -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' 
'((notification_count' == 'expected_count))' 00:22:57.607 11:08:54 -- common/autotest_common.sh@913 -- # get_notification_count 00:22:57.607 11:08:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:57.607 11:08:54 -- host/discovery.sh@74 -- # jq '. | length' 00:22:57.607 11:08:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.607 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:57.607 11:08:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.607 11:08:54 -- host/discovery.sh@74 -- # notification_count=1 00:22:57.607 11:08:54 -- host/discovery.sh@75 -- # notify_id=1 00:22:57.607 11:08:54 -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:57.607 11:08:54 -- common/autotest_common.sh@914 -- # return 0 00:22:57.607 11:08:54 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:57.607 11:08:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.607 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:57.607 11:08:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.607 11:08:54 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:57.607 11:08:54 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:57.607 11:08:54 -- common/autotest_common.sh@911 -- # local max=10 00:22:57.607 11:08:54 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.607 11:08:54 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:57.607 11:08:54 -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:57.607 11:08:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.607 11:08:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.607 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:57.607 11:08:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.607 11:08:54 -- host/discovery.sh@55 -- # sort 00:22:57.607 11:08:54 -- host/discovery.sh@55 -- # xargs 00:22:57.868 11:08:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:57.868 11:08:54 -- common/autotest_common.sh@914 -- # return 0 00:22:57.868 11:08:54 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:57.868 11:08:54 -- host/discovery.sh@79 -- # expected_count=1 00:22:57.868 11:08:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:57.868 11:08:54 -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:57.868 11:08:54 -- common/autotest_common.sh@911 -- # local max=10 00:22:57.868 11:08:54 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # get_notification_count 00:22:57.868 11:08:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:57.868 11:08:54 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:57.868 11:08:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.868 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:57.868 11:08:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.868 11:08:54 -- host/discovery.sh@74 -- # notification_count=1 00:22:57.868 11:08:54 -- host/discovery.sh@75 -- # notify_id=2 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:57.868 11:08:54 -- common/autotest_common.sh@914 -- # return 0 00:22:57.868 11:08:54 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:57.868 11:08:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.868 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:57.868 [2024-05-15 11:08:54.372746] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.868 [2024-05-15 11:08:54.373339] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:57.868 [2024-05-15 11:08:54.373365] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:57.868 11:08:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.868 11:08:54 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@911 -- # local max=10 00:22:57.868 11:08:54 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:57.868 11:08:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.868 11:08:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.868 11:08:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.868 11:08:54 -- host/discovery.sh@59 -- # sort 00:22:57.868 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:57.868 11:08:54 -- host/discovery.sh@59 -- # xargs 00:22:57.868 11:08:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.868 11:08:54 -- common/autotest_common.sh@914 -- # return 0 00:22:57.868 11:08:54 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@911 -- # local max=10 00:22:57.868 11:08:54 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:57.868 11:08:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.868 11:08:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.868 11:08:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.868 11:08:54 -- host/discovery.sh@55 -- # sort 00:22:57.868 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:57.868 11:08:54 -- host/discovery.sh@55 -- # xargs 00:22:57.868 11:08:54 -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:57.868 11:08:54 -- common/autotest_common.sh@914 -- # return 0 00:22:57.868 11:08:54 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@911 -- # local max=10 00:22:57.868 11:08:54 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:57.868 11:08:54 -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:57.868 11:08:54 -- host/discovery.sh@63 -- # sort -n 00:22:57.868 11:08:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:57.868 11:08:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:57.868 11:08:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.868 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:57.868 11:08:54 -- host/discovery.sh@63 -- # xargs 00:22:57.868 [2024-05-15 11:08:54.502878] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:57.868 11:08:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.128 11:08:54 -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:58.128 11:08:54 -- common/autotest_common.sh@916 -- # sleep 1 00:22:58.128 [2024-05-15 11:08:54.601631] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:58.129 [2024-05-15 11:08:54.601648] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:58.129 [2024-05-15 11:08:54.601653] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:59.073 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:59.073 11:08:55 -- host/discovery.sh@63 -- # sort -n 00:22:59.073 11:08:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:59.073 11:08:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.073 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.073 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 11:08:55 -- host/discovery.sh@63 -- # xargs 00:22:59.073 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:59.073 11:08:55 -- common/autotest_common.sh@914 -- # return 0 00:22:59.073 11:08:55 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:59.073 11:08:55 -- host/discovery.sh@79 -- # expected_count=0 00:22:59.073 11:08:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.073 11:08:55 -- 
common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.073 11:08:55 -- common/autotest_common.sh@911 -- # local max=10 00:22:59.073 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # get_notification_count 00:22:59.073 11:08:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:59.073 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.073 11:08:55 -- host/discovery.sh@74 -- # jq '. | length' 00:22:59.073 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.073 11:08:55 -- host/discovery.sh@74 -- # notification_count=0 00:22:59.073 11:08:55 -- host/discovery.sh@75 -- # notify_id=2 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:59.073 11:08:55 -- common/autotest_common.sh@914 -- # return 0 00:22:59.073 11:08:55 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:59.073 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.073 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 [2024-05-15 11:08:55.652770] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:59.073 [2024-05-15 11:08:55.652792] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.073 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.073 11:08:55 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:59.073 11:08:55 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:59.073 11:08:55 -- common/autotest_common.sh@911 -- # local max=10 00:22:59.073 [2024-05-15 11:08:55.658119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.073 [2024-05-15 11:08:55.658139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.073 [2024-05-15 11:08:55.658148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.073 [2024-05-15 11:08:55.658156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.073 [2024-05-15 11:08:55.658164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.073 [2024-05-15 11:08:55.658171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.073 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.073 [2024-05-15 11:08:55.658179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.073 [2024-05-15 11:08:55.658186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:59.073 [2024-05-15 11:08:55.658193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dca10 is same with the state(5) to be set 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:59.073 11:08:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.073 11:08:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.073 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.073 11:08:55 -- host/discovery.sh@59 -- # sort 00:22:59.073 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 11:08:55 -- host/discovery.sh@59 -- # xargs 00:22:59.073 [2024-05-15 11:08:55.668132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24dca10 (9): Bad file descriptor 00:22:59.073 [2024-05-15 11:08:55.678172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.073 [2024-05-15 11:08:55.678311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.073 [2024-05-15 11:08:55.678486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.073 [2024-05-15 11:08:55.678498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24dca10 with addr=10.0.0.2, port=4420 00:22:59.073 [2024-05-15 11:08:55.678507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dca10 is same with the state(5) to be set 00:22:59.073 [2024-05-15 11:08:55.678520] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24dca10 (9): Bad file descriptor 00:22:59.073 [2024-05-15 11:08:55.678531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.073 [2024-05-15 11:08:55.678538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.073 [2024-05-15 11:08:55.678554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.073 [2024-05-15 11:08:55.678567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.073 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.073 [2024-05-15 11:08:55.688227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.073 [2024-05-15 11:08:55.688435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.073 [2024-05-15 11:08:55.688807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.073 [2024-05-15 11:08:55.688844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24dca10 with addr=10.0.0.2, port=4420 00:22:59.073 [2024-05-15 11:08:55.688855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dca10 is same with the state(5) to be set 00:22:59.073 [2024-05-15 11:08:55.688874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24dca10 (9): Bad file descriptor 00:22:59.073 [2024-05-15 11:08:55.688903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.073 [2024-05-15 11:08:55.688911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.073 [2024-05-15 11:08:55.688919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.073 [2024-05-15 11:08:55.688945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.073 [2024-05-15 11:08:55.698280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.073 [2024-05-15 11:08:55.698769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.073 [2024-05-15 11:08:55.698989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.073 [2024-05-15 11:08:55.699004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24dca10 with addr=10.0.0.2, port=4420 00:22:59.073 [2024-05-15 11:08:55.699014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dca10 is same with the state(5) to be set 00:22:59.073 [2024-05-15 11:08:55.699032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24dca10 (9): Bad file descriptor 00:22:59.073 [2024-05-15 11:08:55.699068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.073 [2024-05-15 11:08:55.699077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.073 [2024-05-15 11:08:55.699085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.073 [2024-05-15 11:08:55.699104] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
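The repeated "local max=10 / (( max-- )) / eval ... / sleep 1" lines interleaved with the reconnect errors above are the test's polling helper. Reconstructed from the trace (only the success path is exercised here; the timeout branch is an assumption), it amounts to:

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0    # condition strings like '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        sleep 1
    done
    return 1                        # assumption: on timeout the caller fails the test
}

# The condition strings are built from small jq wrappers over the host app's RPC socket:
get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }

Polling for up to ten seconds gives the discovery AER, and the log-page fetch it triggers, time to land before each assertion.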
00:22:59.073 [2024-05-15 11:08:55.708339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.073 [2024-05-15 11:08:55.708588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.073 [2024-05-15 11:08:55.708893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.073 [2024-05-15 11:08:55.708904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24dca10 with addr=10.0.0.2, port=4420 00:22:59.073 [2024-05-15 11:08:55.708911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dca10 is same with the state(5) to be set 00:22:59.073 [2024-05-15 11:08:55.708923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24dca10 (9): Bad file descriptor 00:22:59.073 [2024-05-15 11:08:55.708933] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.073 [2024-05-15 11:08:55.708939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.073 [2024-05-15 11:08:55.708946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.073 [2024-05-15 11:08:55.708957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.073 11:08:55 -- common/autotest_common.sh@914 -- # return 0 00:22:59.073 11:08:55 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:59.073 11:08:55 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:59.073 11:08:55 -- common/autotest_common.sh@911 -- # local max=10 00:22:59.073 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:59.073 11:08:55 -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:59.073 [2024-05-15 11:08:55.718395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.073 [2024-05-15 11:08:55.718529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.074 [2024-05-15 11:08:55.718751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.074 [2024-05-15 11:08:55.718762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24dca10 with addr=10.0.0.2, port=4420 00:22:59.074 [2024-05-15 11:08:55.718770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dca10 is same with the state(5) to be set 00:22:59.074 [2024-05-15 11:08:55.718781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24dca10 (9): Bad file descriptor 00:22:59.074 [2024-05-15 11:08:55.718790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.074 [2024-05-15 11:08:55.718797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.074 [2024-05-15 11:08:55.718804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:59.074 [2024-05-15 11:08:55.718814] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.074 11:08:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.074 11:08:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.074 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.074 11:08:55 -- host/discovery.sh@55 -- # sort 00:22:59.074 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.074 11:08:55 -- host/discovery.sh@55 -- # xargs 00:22:59.334 [2024-05-15 11:08:55.728446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.334 [2024-05-15 11:08:55.728786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.334 [2024-05-15 11:08:55.729104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.334 [2024-05-15 11:08:55.729114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24dca10 with addr=10.0.0.2, port=4420 00:22:59.334 [2024-05-15 11:08:55.729128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dca10 is same with the state(5) to be set 00:22:59.334 [2024-05-15 11:08:55.729139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24dca10 (9): Bad file descriptor 00:22:59.334 [2024-05-15 11:08:55.729174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.334 [2024-05-15 11:08:55.729182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.334 [2024-05-15 11:08:55.729189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.334 [2024-05-15 11:08:55.729200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.334 [2024-05-15 11:08:55.738502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.334 [2024-05-15 11:08:55.738692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.334 [2024-05-15 11:08:55.738998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.334 [2024-05-15 11:08:55.739008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24dca10 with addr=10.0.0.2, port=4420 00:22:59.334 [2024-05-15 11:08:55.739015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dca10 is same with the state(5) to be set 00:22:59.334 [2024-05-15 11:08:55.739026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24dca10 (9): Bad file descriptor 00:22:59.334 [2024-05-15 11:08:55.739037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.334 [2024-05-15 11:08:55.739043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.334 [2024-05-15 11:08:55.739050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.334 [2024-05-15 11:08:55.739060] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
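The connect() errno 111 spam above comes from the path shuffle the target-side RPCs drive: a second listener is added on 4421, then the original 4420 listener is removed while the host still holds a path to it, so every reset attempt fails until the next discovery log page drops 4420 and keeps only 4421 (which is exactly what the "not found" / "found again" lines just below report). A sketch of that sequence, reusing the waitforcondition helper sketched earlier:

# Port list for one controller, as the host app sees it (numerically sorted)
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'

rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'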
00:22:59.334 [2024-05-15 11:08:55.740203] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:59.334 [2024-05-15 11:08:55.740221] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:59.334 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.334 11:08:55 -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:59.334 11:08:55 -- common/autotest_common.sh@914 -- # return 0 00:22:59.334 11:08:55 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:59.334 11:08:55 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:59.334 11:08:55 -- common/autotest_common.sh@911 -- # local max=10 00:22:59.334 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:59.335 11:08:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:59.335 11:08:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.335 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.335 11:08:55 -- host/discovery.sh@63 -- # sort -n 00:22:59.335 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.335 11:08:55 -- host/discovery.sh@63 -- # xargs 00:22:59.335 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.335 11:08:55 -- common/autotest_common.sh@914 -- # return 0 00:22:59.335 11:08:55 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:59.335 11:08:55 -- host/discovery.sh@79 -- # expected_count=0 00:22:59.335 11:08:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.335 11:08:55 -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.335 11:08:55 -- common/autotest_common.sh@911 -- # local max=10 00:22:59.335 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # get_notification_count 00:22:59.335 11:08:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:59.335 11:08:55 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:59.335 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.335 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.335 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.335 11:08:55 -- host/discovery.sh@74 -- # notification_count=0 00:22:59.335 11:08:55 -- host/discovery.sh@75 -- # notify_id=2 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:59.335 11:08:55 -- common/autotest_common.sh@914 -- # return 0 00:22:59.335 11:08:55 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:59.335 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.335 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.335 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.335 11:08:55 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:59.335 11:08:55 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:59.335 11:08:55 -- common/autotest_common.sh@911 -- # local max=10 00:22:59.335 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:59.335 11:08:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.335 11:08:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.335 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.335 11:08:55 -- host/discovery.sh@59 -- # sort 00:22:59.335 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.335 11:08:55 -- host/discovery.sh@59 -- # xargs 00:22:59.335 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:22:59.335 11:08:55 -- common/autotest_common.sh@914 -- # return 0 00:22:59.335 11:08:55 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:59.335 11:08:55 -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:59.335 11:08:55 -- common/autotest_common.sh@911 -- # local max=10 00:22:59.335 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:59.335 11:08:55 -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:59.335 11:08:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.335 11:08:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.335 11:08:55 -- host/discovery.sh@55 -- # sort 00:22:59.335 11:08:55 -- host/discovery.sh@55 -- # xargs 00:22:59.335 11:08:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.335 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:59.335 11:08:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.596 11:08:55 -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:22:59.596 11:08:55 -- common/autotest_common.sh@914 -- # return 0 00:22:59.596 11:08:55 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:59.596 11:08:55 -- host/discovery.sh@79 -- # expected_count=2 00:22:59.596 11:08:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.596 11:08:55 -- common/autotest_common.sh@910 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.596 11:08:55 -- common/autotest_common.sh@911 -- # local max=10 00:22:59.596 11:08:55 -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:59.596 11:08:55 -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.596 11:08:55 -- common/autotest_common.sh@913 -- # get_notification_count 00:22:59.596 11:08:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:59.596 11:08:56 -- host/discovery.sh@74 -- # jq '. | length' 00:22:59.596 11:08:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.596 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:22:59.596 11:08:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.596 11:08:56 -- host/discovery.sh@74 -- # notification_count=2 00:22:59.596 11:08:56 -- host/discovery.sh@75 -- # notify_id=4 00:22:59.596 11:08:56 -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:59.596 11:08:56 -- common/autotest_common.sh@914 -- # return 0 00:22:59.596 11:08:56 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:59.596 11:08:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.596 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:23:00.537 [2024-05-15 11:08:57.065286] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:00.537 [2024-05-15 11:08:57.065303] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:00.537 [2024-05-15 11:08:57.065316] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:00.537 [2024-05-15 11:08:57.154609] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:00.799 [2024-05-15 11:08:57.259583] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:00.799 [2024-05-15 11:08:57.259613] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:00.799 11:08:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.799 11:08:57 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.799 11:08:57 -- common/autotest_common.sh@648 -- # local es=0 00:23:00.799 11:08:57 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.799 11:08:57 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:00.799 11:08:57 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.799 11:08:57 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:00.799 11:08:57 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.799 11:08:57 -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.799 11:08:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.799 11:08:57 -- 
common/autotest_common.sh@10 -- # set +x 00:23:00.799 request: 00:23:00.799 { 00:23:00.799 "name": "nvme", 00:23:00.799 "trtype": "tcp", 00:23:00.799 "traddr": "10.0.0.2", 00:23:00.799 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:00.799 "adrfam": "ipv4", 00:23:00.799 "trsvcid": "8009", 00:23:00.799 "wait_for_attach": true, 00:23:00.799 "method": "bdev_nvme_start_discovery", 00:23:00.799 "req_id": 1 00:23:00.799 } 00:23:00.799 Got JSON-RPC error response 00:23:00.799 response: 00:23:00.799 { 00:23:00.799 "code": -17, 00:23:00.799 "message": "File exists" 00:23:00.799 } 00:23:00.799 11:08:57 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:00.799 11:08:57 -- common/autotest_common.sh@651 -- # es=1 00:23:00.799 11:08:57 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:00.799 11:08:57 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:00.799 11:08:57 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:00.799 11:08:57 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:00.799 11:08:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:00.799 11:08:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:00.799 11:08:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.799 11:08:57 -- host/discovery.sh@67 -- # sort 00:23:00.799 11:08:57 -- common/autotest_common.sh@10 -- # set +x 00:23:00.799 11:08:57 -- host/discovery.sh@67 -- # xargs 00:23:00.799 11:08:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.799 11:08:57 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:00.799 11:08:57 -- host/discovery.sh@146 -- # get_bdev_list 00:23:00.799 11:08:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.799 11:08:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.799 11:08:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.799 11:08:57 -- host/discovery.sh@55 -- # sort 00:23:00.799 11:08:57 -- common/autotest_common.sh@10 -- # set +x 00:23:00.799 11:08:57 -- host/discovery.sh@55 -- # xargs 00:23:00.799 11:08:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.799 11:08:57 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:00.799 11:08:57 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.799 11:08:57 -- common/autotest_common.sh@648 -- # local es=0 00:23:00.799 11:08:57 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.799 11:08:57 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:00.799 11:08:57 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.799 11:08:57 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:00.799 11:08:57 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.799 11:08:57 -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.799 11:08:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.799 11:08:57 -- common/autotest_common.sh@10 -- # set +x 00:23:00.799 request: 00:23:00.799 { 00:23:00.799 "name": "nvme_second", 00:23:00.799 "trtype": "tcp", 00:23:00.799 "traddr": "10.0.0.2", 00:23:00.799 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:23:00.799 "adrfam": "ipv4", 00:23:00.799 "trsvcid": "8009", 00:23:00.799 "wait_for_attach": true, 00:23:00.799 "method": "bdev_nvme_start_discovery", 00:23:00.799 "req_id": 1 00:23:00.799 } 00:23:00.799 Got JSON-RPC error response 00:23:00.799 response: 00:23:00.799 { 00:23:00.799 "code": -17, 00:23:00.799 "message": "File exists" 00:23:00.799 } 00:23:00.799 11:08:57 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:00.799 11:08:57 -- common/autotest_common.sh@651 -- # es=1 00:23:00.799 11:08:57 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:00.799 11:08:57 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:00.799 11:08:57 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:00.799 11:08:57 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:00.799 11:08:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:00.799 11:08:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:00.799 11:08:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.799 11:08:57 -- host/discovery.sh@67 -- # sort 00:23:00.799 11:08:57 -- common/autotest_common.sh@10 -- # set +x 00:23:00.799 11:08:57 -- host/discovery.sh@67 -- # xargs 00:23:00.799 11:08:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.061 11:08:57 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:01.061 11:08:57 -- host/discovery.sh@152 -- # get_bdev_list 00:23:01.061 11:08:57 -- host/discovery.sh@55 -- # xargs 00:23:01.061 11:08:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.061 11:08:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.061 11:08:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.061 11:08:57 -- host/discovery.sh@55 -- # sort 00:23:01.061 11:08:57 -- common/autotest_common.sh@10 -- # set +x 00:23:01.061 11:08:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.061 11:08:57 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:01.061 11:08:57 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:01.061 11:08:57 -- common/autotest_common.sh@648 -- # local es=0 00:23:01.061 11:08:57 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:01.061 11:08:57 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:01.061 11:08:57 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.061 11:08:57 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:01.061 11:08:57 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.061 11:08:57 -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:01.061 11:08:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.061 11:08:57 -- common/autotest_common.sh@10 -- # set +x 00:23:02.003 [2024-05-15 11:08:58.532370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.003 [2024-05-15 11:08:58.532712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.003 [2024-05-15 11:08:58.532725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x24d8ac0 with addr=10.0.0.2, port=8010 00:23:02.003 [2024-05-15 11:08:58.532739] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:02.003 [2024-05-15 11:08:58.532748] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:02.003 [2024-05-15 11:08:58.532756] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:02.948 [2024-05-15 11:08:59.534655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.948 [2024-05-15 11:08:59.534992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.948 [2024-05-15 11:08:59.535006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d8ac0 with addr=10.0.0.2, port=8010 00:23:02.948 [2024-05-15 11:08:59.535018] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:02.948 [2024-05-15 11:08:59.535025] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:02.948 [2024-05-15 11:08:59.535032] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:03.890 [2024-05-15 11:09:00.536691] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:03.890 request: 00:23:03.890 { 00:23:03.890 "name": "nvme_second", 00:23:03.890 "trtype": "tcp", 00:23:03.890 "traddr": "10.0.0.2", 00:23:03.890 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:03.890 "adrfam": "ipv4", 00:23:03.890 "trsvcid": "8010", 00:23:03.890 "attach_timeout_ms": 3000, 00:23:03.890 "method": "bdev_nvme_start_discovery", 00:23:03.890 "req_id": 1 00:23:03.890 } 00:23:03.890 Got JSON-RPC error response 00:23:03.890 response: 00:23:03.890 { 00:23:03.890 "code": -110, 00:23:03.890 "message": "Connection timed out" 00:23:03.890 } 00:23:03.890 11:09:00 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:03.890 11:09:00 -- common/autotest_common.sh@651 -- # es=1 00:23:04.152 11:09:00 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:04.152 11:09:00 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:04.152 11:09:00 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:04.152 11:09:00 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:04.152 11:09:00 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:04.152 11:09:00 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:04.152 11:09:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.152 11:09:00 -- host/discovery.sh@67 -- # sort 00:23:04.152 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:23:04.152 11:09:00 -- host/discovery.sh@67 -- # xargs 00:23:04.152 11:09:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.152 11:09:00 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:04.152 11:09:00 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:04.152 11:09:00 -- host/discovery.sh@161 -- # kill 441204 00:23:04.152 11:09:00 -- host/discovery.sh@162 -- # nvmftestfini 00:23:04.152 11:09:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:04.152 11:09:00 -- nvmf/common.sh@117 -- # sync 00:23:04.152 11:09:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.152 11:09:00 -- nvmf/common.sh@120 -- # set +e 00:23:04.152 11:09:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.152 11:09:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.152 rmmod nvme_tcp 00:23:04.152 rmmod nvme_fabrics 
00:23:04.152 rmmod nvme_keyring 00:23:04.152 11:09:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.153 11:09:00 -- nvmf/common.sh@124 -- # set -e 00:23:04.153 11:09:00 -- nvmf/common.sh@125 -- # return 0 00:23:04.153 11:09:00 -- nvmf/common.sh@478 -- # '[' -n 440986 ']' 00:23:04.153 11:09:00 -- nvmf/common.sh@479 -- # killprocess 440986 00:23:04.153 11:09:00 -- common/autotest_common.sh@946 -- # '[' -z 440986 ']' 00:23:04.153 11:09:00 -- common/autotest_common.sh@950 -- # kill -0 440986 00:23:04.153 11:09:00 -- common/autotest_common.sh@951 -- # uname 00:23:04.153 11:09:00 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.153 11:09:00 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 440986 00:23:04.153 11:09:00 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:04.153 11:09:00 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:04.153 11:09:00 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 440986' 00:23:04.153 killing process with pid 440986 00:23:04.153 11:09:00 -- common/autotest_common.sh@965 -- # kill 440986 00:23:04.153 [2024-05-15 11:09:00.741180] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:04.153 11:09:00 -- common/autotest_common.sh@970 -- # wait 440986 00:23:04.414 11:09:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:04.414 11:09:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:04.414 11:09:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:04.414 11:09:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.414 11:09:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.414 11:09:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.414 11:09:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.414 11:09:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.327 11:09:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.327 00:23:06.327 real 0m19.770s 00:23:06.327 user 0m23.217s 00:23:06.327 sys 0m6.814s 00:23:06.327 11:09:02 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:06.327 11:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 ************************************ 00:23:06.327 END TEST nvmf_host_discovery 00:23:06.327 ************************************ 00:23:06.327 11:09:02 -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:06.327 11:09:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:06.327 11:09:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:06.327 11:09:02 -- common/autotest_common.sh@10 -- # set +x 00:23:06.588 ************************************ 00:23:06.588 START TEST nvmf_host_multipath_status 00:23:06.588 ************************************ 00:23:06.588 11:09:03 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:06.588 * Looking for test storage... 
00:23:06.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.588 11:09:03 -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.588 11:09:03 -- nvmf/common.sh@7 -- # uname -s 00:23:06.588 11:09:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.588 11:09:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.588 11:09:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.588 11:09:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.588 11:09:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.588 11:09:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.588 11:09:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.588 11:09:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.588 11:09:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.588 11:09:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.588 11:09:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.588 11:09:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.588 11:09:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.588 11:09:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.588 11:09:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.588 11:09:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.588 11:09:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.588 11:09:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.588 11:09:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.588 11:09:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.588 11:09:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.588 11:09:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.588 11:09:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.589 11:09:03 -- paths/export.sh@5 -- # export PATH 00:23:06.589 11:09:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.589 11:09:03 -- nvmf/common.sh@47 -- # : 0 00:23:06.589 11:09:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.589 11:09:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.589 11:09:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.589 11:09:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.589 11:09:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.589 11:09:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.589 11:09:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.589 11:09:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.589 11:09:03 -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:06.589 11:09:03 -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:06.589 11:09:03 -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:06.589 11:09:03 -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:06.589 11:09:03 -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.589 11:09:03 -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:06.589 11:09:03 -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:06.589 11:09:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:06.589 11:09:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.589 11:09:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:06.589 11:09:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:06.589 11:09:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:06.589 11:09:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.589 11:09:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.589 11:09:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.589 11:09:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:06.589 11:09:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:06.589 11:09:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.589 11:09:03 -- common/autotest_common.sh@10 -- # set +x 00:23:14.734 11:09:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:14.734 11:09:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.734 11:09:09 -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.734 11:09:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.734 11:09:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.734 11:09:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.734 11:09:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.734 11:09:09 -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.734 11:09:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.734 11:09:09 -- nvmf/common.sh@296 -- # e810=() 00:23:14.734 11:09:09 -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.734 11:09:09 -- nvmf/common.sh@297 -- # x722=() 00:23:14.734 11:09:09 -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.734 11:09:09 -- nvmf/common.sh@298 -- # mlx=() 00:23:14.734 11:09:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.734 11:09:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.734 11:09:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.734 11:09:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.734 11:09:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.734 11:09:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.734 11:09:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:14.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:14.734 11:09:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.734 11:09:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:14.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:14.734 11:09:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.734 11:09:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.734 11:09:09 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.734 11:09:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.734 11:09:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.734 11:09:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:14.734 11:09:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.734 11:09:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:14.734 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:14.734 11:09:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.734 11:09:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.734 11:09:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.734 11:09:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:14.734 11:09:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.734 11:09:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:14.734 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:14.734 11:09:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.734 11:09:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:14.734 11:09:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:14.734 11:09:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:14.734 11:09:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:14.734 11:09:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:14.734 11:09:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.734 11:09:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.734 11:09:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.734 11:09:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:14.734 11:09:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.734 11:09:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.734 11:09:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:14.734 11:09:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.734 11:09:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.734 11:09:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:14.734 11:09:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:14.734 11:09:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.734 11:09:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.734 11:09:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.734 11:09:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.734 11:09:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:14.734 11:09:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.734 11:09:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.734 11:09:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.734 11:09:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:14.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:14.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:23:14.734 00:23:14.734 --- 10.0.0.2 ping statistics --- 00:23:14.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.734 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:23:14.734 11:09:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:23:14.734 00:23:14.734 --- 10.0.0.1 ping statistics --- 00:23:14.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.734 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:23:14.734 11:09:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.734 11:09:10 -- nvmf/common.sh@411 -- # return 0 00:23:14.734 11:09:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:14.734 11:09:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.734 11:09:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:14.734 11:09:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:14.734 11:09:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.734 11:09:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:14.734 11:09:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:14.734 11:09:10 -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:14.734 11:09:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:14.734 11:09:10 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:14.734 11:09:10 -- common/autotest_common.sh@10 -- # set +x 00:23:14.734 11:09:10 -- nvmf/common.sh@470 -- # nvmfpid=447937 00:23:14.734 11:09:10 -- nvmf/common.sh@471 -- # waitforlisten 447937 00:23:14.734 11:09:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:14.734 11:09:10 -- common/autotest_common.sh@827 -- # '[' -z 447937 ']' 00:23:14.734 11:09:10 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.734 11:09:10 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:14.734 11:09:10 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.734 11:09:10 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:14.734 11:09:10 -- common/autotest_common.sh@10 -- # set +x 00:23:14.734 [2024-05-15 11:09:10.402198] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:23:14.735 [2024-05-15 11:09:10.402262] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.735 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.735 [2024-05-15 11:09:10.471264] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:14.735 [2024-05-15 11:09:10.546632] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.735 [2024-05-15 11:09:10.546667] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:14.735 [2024-05-15 11:09:10.546675] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.735 [2024-05-15 11:09:10.546681] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.735 [2024-05-15 11:09:10.546687] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.735 [2024-05-15 11:09:10.546827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.735 [2024-05-15 11:09:10.546827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.735 11:09:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.735 11:09:11 -- common/autotest_common.sh@860 -- # return 0 00:23:14.735 11:09:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:14.735 11:09:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.735 11:09:11 -- common/autotest_common.sh@10 -- # set +x 00:23:14.735 11:09:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.735 11:09:11 -- host/multipath_status.sh@34 -- # nvmfapp_pid=447937 00:23:14.735 11:09:11 -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:14.735 [2024-05-15 11:09:11.363856] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.735 11:09:11 -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:14.995 Malloc0 00:23:14.995 11:09:11 -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:15.255 11:09:11 -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.255 11:09:11 -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.514 [2024-05-15 11:09:12.008131] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:15.514 [2024-05-15 11:09:12.008363] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.514 11:09:12 -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:15.514 [2024-05-15 11:09:12.164676] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:15.774 11:09:12 -- host/multipath_status.sh@45 -- # bdevperf_pid=448299 00:23:15.774 11:09:12 -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.774 11:09:12 -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:15.774 11:09:12 -- host/multipath_status.sh@48 -- # waitforlisten 448299 /var/tmp/bdevperf.sock 00:23:15.774 11:09:12 -- common/autotest_common.sh@827 -- # '[' -z 448299 ']' 00:23:15.774 11:09:12 -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:15.774 11:09:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.774 11:09:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.774 11:09:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.774 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:23:16.714 11:09:12 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.714 11:09:12 -- common/autotest_common.sh@860 -- # return 0 00:23:16.714 11:09:12 -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:16.714 11:09:13 -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:16.974 Nvme0n1 00:23:16.974 11:09:13 -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:17.234 Nvme0n1 00:23:17.234 11:09:13 -- host/multipath_status.sh@78 -- # sleep 2 00:23:17.234 11:09:13 -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:19.777 11:09:15 -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:19.777 11:09:15 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:19.777 11:09:16 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:19.777 11:09:16 -- host/multipath_status.sh@91 -- # sleep 1 00:23:20.724 11:09:17 -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:20.724 11:09:17 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:20.724 11:09:17 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.724 11:09:17 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:20.984 11:09:17 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.984 11:09:17 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:20.984 11:09:17 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.984 11:09:17 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.984 11:09:17 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.984 11:09:17 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.985 11:09:17 -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.985 11:09:17 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:21.245 11:09:17 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.245 11:09:17 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:21.245 11:09:17 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.245 11:09:17 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:21.506 11:09:17 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.506 11:09:17 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:21.506 11:09:17 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.506 11:09:17 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:21.506 11:09:18 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.506 11:09:18 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:21.506 11:09:18 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.506 11:09:18 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:21.766 11:09:18 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.766 11:09:18 -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:21.766 11:09:18 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:22.027 11:09:18 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:22.027 11:09:18 -- host/multipath_status.sh@95 -- # sleep 1 00:23:22.967 11:09:19 -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:22.967 11:09:19 -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:22.967 11:09:19 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.967 11:09:19 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:23.228 11:09:19 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.228 11:09:19 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:23.228 11:09:19 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.228 11:09:19 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:23.488 11:09:19 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.488 11:09:19 -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:23:23.488 11:09:19 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.488 11:09:19 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:23.488 11:09:20 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.488 11:09:20 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:23.488 11:09:20 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.488 11:09:20 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:23.749 11:09:20 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.749 11:09:20 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:23.749 11:09:20 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.749 11:09:20 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:24.010 11:09:20 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.010 11:09:20 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:24.010 11:09:20 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.010 11:09:20 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:24.010 11:09:20 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.010 11:09:20 -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:24.010 11:09:20 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:24.271 11:09:20 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:24.532 11:09:20 -- host/multipath_status.sh@101 -- # sleep 1 00:23:25.472 11:09:21 -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:25.472 11:09:21 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:25.472 11:09:21 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.473 11:09:21 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:25.473 11:09:22 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.473 11:09:22 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:25.473 11:09:22 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.473 11:09:22 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:25.732 11:09:22 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e 
]] 00:23:25.732 11:09:22 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:25.732 11:09:22 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.732 11:09:22 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:25.991 11:09:22 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.991 11:09:22 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.991 11:09:22 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.991 11:09:22 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.991 11:09:22 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.991 11:09:22 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:25.991 11:09:22 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.991 11:09:22 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.251 11:09:22 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.251 11:09:22 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:26.251 11:09:22 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.251 11:09:22 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:26.511 11:09:22 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.511 11:09:22 -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:26.511 11:09:22 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:26.511 11:09:23 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:26.770 11:09:23 -- host/multipath_status.sh@105 -- # sleep 1 00:23:27.711 11:09:24 -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:27.711 11:09:24 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:27.711 11:09:24 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.711 11:09:24 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:27.971 11:09:24 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.971 11:09:24 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:27.971 11:09:24 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.971 11:09:24 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
00:23:28.232 11:09:24 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:28.232 11:09:24 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:28.232 11:09:24 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.232 11:09:24 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:28.232 11:09:24 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.232 11:09:24 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:28.232 11:09:24 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.232 11:09:24 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:28.492 11:09:24 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.492 11:09:24 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:28.492 11:09:24 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.492 11:09:24 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:28.493 11:09:25 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.493 11:09:25 -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:28.493 11:09:25 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.493 11:09:25 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:28.753 11:09:25 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:28.753 11:09:25 -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:28.753 11:09:25 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:29.014 11:09:25 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:29.014 11:09:25 -- host/multipath_status.sh@109 -- # sleep 1 00:23:30.398 11:09:26 -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:30.398 11:09:26 -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:30.398 11:09:26 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.398 11:09:26 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:30.398 11:09:26 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.398 11:09:26 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:30.398 11:09:26 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.398 11:09:26 -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.398 11:09:26 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.398 11:09:26 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.398 11:09:26 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.398 11:09:26 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.659 11:09:27 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.659 11:09:27 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.659 11:09:27 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.659 11:09:27 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.659 11:09:27 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.659 11:09:27 -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:30.659 11:09:27 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.659 11:09:27 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.920 11:09:27 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.920 11:09:27 -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:30.920 11:09:27 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.920 11:09:27 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:31.180 11:09:27 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:31.180 11:09:27 -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:31.180 11:09:27 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:31.180 11:09:27 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:31.442 11:09:27 -- host/multipath_status.sh@113 -- # sleep 1 00:23:32.383 11:09:28 -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:32.383 11:09:28 -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:32.383 11:09:28 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.383 11:09:28 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:32.643 11:09:29 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:32.643 11:09:29 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:32.643 11:09:29 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.643 11:09:29 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.643 11:09:29 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.643 11:09:29 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:32.643 11:09:29 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.643 11:09:29 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:32.903 11:09:29 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.903 11:09:29 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:32.903 11:09:29 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.903 11:09:29 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.163 11:09:29 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.163 11:09:29 -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:33.163 11:09:29 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.163 11:09:29 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:33.163 11:09:29 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:33.163 11:09:29 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:33.163 11:09:29 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.163 11:09:29 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:33.424 11:09:29 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.424 11:09:29 -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:33.685 11:09:30 -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:33.685 11:09:30 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:33.685 11:09:30 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:33.945 11:09:30 -- host/multipath_status.sh@120 -- # sleep 1 00:23:34.886 11:09:31 -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:34.886 11:09:31 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:34.886 11:09:31 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.886 11:09:31 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.146 11:09:31 -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.146 11:09:31 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:35.146 11:09:31 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.146 11:09:31 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:35.407 11:09:31 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.407 11:09:31 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:35.407 11:09:31 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.407 11:09:31 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:35.407 11:09:31 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.407 11:09:31 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:35.407 11:09:31 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.407 11:09:31 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.667 11:09:32 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.667 11:09:32 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.667 11:09:32 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.667 11:09:32 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.667 11:09:32 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.667 11:09:32 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:35.667 11:09:32 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.667 11:09:32 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.927 11:09:32 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.927 11:09:32 -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:35.927 11:09:32 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:36.188 11:09:32 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:36.188 11:09:32 -- host/multipath_status.sh@124 -- # sleep 1 00:23:37.570 11:09:33 -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:37.571 11:09:33 -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:37.571 11:09:33 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.571 11:09:33 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:23:37.571 11:09:33 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.571 11:09:33 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:37.571 11:09:33 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.571 11:09:33 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.571 11:09:34 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.571 11:09:34 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.571 11:09:34 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.571 11:09:34 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:37.831 11:09:34 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.831 11:09:34 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:37.831 11:09:34 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.831 11:09:34 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.831 11:09:34 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.831 11:09:34 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:37.831 11:09:34 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.831 11:09:34 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.092 11:09:34 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.092 11:09:34 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:38.092 11:09:34 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.092 11:09:34 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:38.353 11:09:34 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.353 11:09:34 -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:38.353 11:09:34 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:38.353 11:09:34 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:38.614 11:09:35 -- host/multipath_status.sh@130 -- # sleep 1 00:23:39.555 11:09:36 -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:39.555 11:09:36 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:39.555 11:09:36 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.555 
11:09:36 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.816 11:09:36 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.816 11:09:36 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:39.816 11:09:36 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.816 11:09:36 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.816 11:09:36 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.816 11:09:36 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.816 11:09:36 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.816 11:09:36 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.077 11:09:36 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.077 11:09:36 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:40.077 11:09:36 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.077 11:09:36 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:40.337 11:09:36 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.338 11:09:36 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:40.338 11:09:36 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.338 11:09:36 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.338 11:09:36 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.338 11:09:36 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:40.338 11:09:36 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.338 11:09:36 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.599 11:09:37 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.599 11:09:37 -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:40.599 11:09:37 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.860 11:09:37 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:40.860 11:09:37 -- host/multipath_status.sh@134 -- # sleep 1 00:23:42.244 11:09:38 -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:42.244 11:09:38 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:42.244 11:09:38 -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.244 11:09:38 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:42.244 11:09:38 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.244 11:09:38 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:42.244 11:09:38 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.244 11:09:38 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:42.244 11:09:38 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:42.244 11:09:38 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:42.244 11:09:38 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.244 11:09:38 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:42.512 11:09:38 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.512 11:09:38 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:42.512 11:09:38 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.512 11:09:38 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:42.776 11:09:39 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.776 11:09:39 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:42.776 11:09:39 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.776 11:09:39 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:42.776 11:09:39 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.776 11:09:39 -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:42.776 11:09:39 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.777 11:09:39 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:43.041 11:09:39 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.041 11:09:39 -- host/multipath_status.sh@137 -- # killprocess 448299 00:23:43.041 11:09:39 -- common/autotest_common.sh@946 -- # '[' -z 448299 ']' 00:23:43.041 11:09:39 -- common/autotest_common.sh@950 -- # kill -0 448299 00:23:43.041 11:09:39 -- common/autotest_common.sh@951 -- # uname 00:23:43.041 11:09:39 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:43.041 11:09:39 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 448299 00:23:43.041 11:09:39 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:43.041 11:09:39 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:43.041 11:09:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 448299' 00:23:43.041 killing process with pid 448299 00:23:43.041 
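
The xtrace above is the final check_status pass of multipath_status.sh: after every set_ANA_state transition the script sleeps for one second and then verifies the current/connected/accessible flags that bdevperf reports for the 4420 and 4421 I/O paths. As a reading aid, below is a minimal sketch of those helpers reconstructed purely from the trace lines in this log (multipath_status.sh@59-73); the authoritative script (referenced here as host/multipath_status.sh, under spdk/test/nvmf/host/) may differ in detail, and the variable names used for the RPC script, bdevperf socket, NQN and target address are illustrative.

    #!/usr/bin/env bash
    # Sketch of the multipath status helpers, reconstructed from the xtrace output
    # in this log. Paths and addresses are the ones visible in this run; variable
    # names are illustrative and not taken from the real script.

    rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    bdevperf_rpc_sock="/var/tmp/bdevperf.sock"
    NQN="nqn.2016-06.io.spdk:cnode1"
    TARGET_IP="10.0.0.2"

    # Set the ANA state of the 4420 listener to $1 and the 4421 listener to $2,
    # e.g. "optimized", "non_optimized" or "inaccessible".
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a "$TARGET_IP" -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a "$TARGET_IP" -s 4421 -n "$2"
    }

    # Query bdevperf's I/O paths and compare one field ("current", "connected"
    # or "accessible") of the path whose trsvcid matches the given port against
    # the expected value; the function's exit status is the comparison result.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

With these definitions, the sequence logged above corresponds to calls such as "set_ANA_state non_optimized inaccessible; sleep 1; check_status true false true true true false", where each port_status invocation maps to one rpc.py + jq pair in the trace.
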
11:09:39 -- common/autotest_common.sh@965 -- # kill 448299 00:23:43.041 11:09:39 -- common/autotest_common.sh@970 -- # wait 448299 00:23:43.041 Connection closed with partial response: 00:23:43.041 00:23:43.041 00:23:43.041 11:09:39 -- host/multipath_status.sh@139 -- # wait 448299 00:23:43.041 11:09:39 -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:43.041 [2024-05-15 11:09:12.232574] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:23:43.042 [2024-05-15 11:09:12.232677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448299 ] 00:23:43.042 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.042 [2024-05-15 11:09:12.286445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.042 [2024-05-15 11:09:12.337940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.042 Running I/O for 90 seconds... 00:23:43.042 [2024-05-15 11:09:25.450689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.042 [2024-05-15 11:09:25.450724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:43.042 [2024-05-15 11:09:25.451425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.042 [2024-05-15 11:09:25.451749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:43.042 [2024-05-15 11:09:25.451759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.043 [2024-05-15 11:09:25.451825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:23:43.043 [2024-05-15 11:09:25.451896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.451928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.451933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.043 [2024-05-15 11:09:25.452182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.043 [2024-05-15 11:09:25.452201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.043 [2024-05-15 11:09:25.452219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.043 [2024-05-15 11:09:25.452236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.043 [2024-05-15 11:09:25.452254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.043 [2024-05-15 11:09:25.452271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.043 [2024-05-15 11:09:25.452289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.043 [2024-05-15 11:09:25.452521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:43.043 [2024-05-15 11:09:25.452539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:43.043 [2024-05-15 11:09:25.452556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.452992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.452999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:23:43.044 [2024-05-15 11:09:25.453309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.044 [2024-05-15 11:09:25.453691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:43.044 [2024-05-15 11:09:25.453706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:25.453978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:25.453984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:37.466521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:37.466565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:37.466582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:37.466597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:37.466613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.045 [2024-05-15 11:09:37.466632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:43.045 [2024-05-15 11:09:37.466648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.045 [2024-05-15 11:09:37.466663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.466673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.045 [2024-05-15 11:09:37.466678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.467123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:43.045 [2024-05-15 11:09:37.467133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.467145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.045 [2024-05-15 11:09:37.467151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.467161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.045 [2024-05-15 11:09:37.467166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:43.045 [2024-05-15 11:09:37.467532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.045 [2024-05-15 11:09:37.467542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:43.045 Received shutdown signal, test time was about 25.531389 seconds 00:23:43.045 00:23:43.045 Latency(us) 00:23:43.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.045 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:43.045 Verification LBA range: start 0x0 length 0x4000 00:23:43.045 Nvme0n1 : 25.53 11157.28 43.58 0.00 0.00 11454.65 264.53 3019898.88 00:23:43.045 =================================================================================================================== 00:23:43.045 Total : 11157.28 43.58 0.00 0.00 11454.65 264.53 3019898.88 00:23:43.045 11:09:39 -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.306 11:09:39 -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:43.306 11:09:39 -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:43.306 11:09:39 -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:43.306 11:09:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:43.306 11:09:39 -- nvmf/common.sh@117 -- 
# sync 00:23:43.306 11:09:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.306 11:09:39 -- nvmf/common.sh@120 -- # set +e 00:23:43.306 11:09:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.306 11:09:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.306 rmmod nvme_tcp 00:23:43.306 rmmod nvme_fabrics 00:23:43.306 rmmod nvme_keyring 00:23:43.306 11:09:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.306 11:09:39 -- nvmf/common.sh@124 -- # set -e 00:23:43.306 11:09:39 -- nvmf/common.sh@125 -- # return 0 00:23:43.306 11:09:39 -- nvmf/common.sh@478 -- # '[' -n 447937 ']' 00:23:43.306 11:09:39 -- nvmf/common.sh@479 -- # killprocess 447937 00:23:43.306 11:09:39 -- common/autotest_common.sh@946 -- # '[' -z 447937 ']' 00:23:43.306 11:09:39 -- common/autotest_common.sh@950 -- # kill -0 447937 00:23:43.306 11:09:39 -- common/autotest_common.sh@951 -- # uname 00:23:43.306 11:09:39 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:43.306 11:09:39 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 447937 00:23:43.566 11:09:39 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:43.566 11:09:39 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:43.566 11:09:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 447937' 00:23:43.566 killing process with pid 447937 00:23:43.566 11:09:39 -- common/autotest_common.sh@965 -- # kill 447937 00:23:43.566 [2024-05-15 11:09:39.960641] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:43.566 11:09:39 -- common/autotest_common.sh@970 -- # wait 447937 00:23:43.566 11:09:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:43.566 11:09:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:43.566 11:09:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:43.566 11:09:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.566 11:09:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.566 11:09:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.566 11:09:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.566 11:09:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.109 11:09:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.109 00:23:46.109 real 0m39.170s 00:23:46.109 user 1m40.875s 00:23:46.109 sys 0m10.714s 00:23:46.109 11:09:42 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:46.109 11:09:42 -- common/autotest_common.sh@10 -- # set +x 00:23:46.109 ************************************ 00:23:46.109 END TEST nvmf_host_multipath_status 00:23:46.109 ************************************ 00:23:46.109 11:09:42 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:46.109 11:09:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:46.109 11:09:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:46.109 11:09:42 -- common/autotest_common.sh@10 -- # set +x 00:23:46.109 ************************************ 00:23:46.109 START TEST nvmf_discovery_remove_ifc 00:23:46.109 ************************************ 00:23:46.109 11:09:42 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh 
--transport=tcp 00:23:46.109 * Looking for test storage... 00:23:46.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.109 11:09:42 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.109 11:09:42 -- nvmf/common.sh@7 -- # uname -s 00:23:46.109 11:09:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.109 11:09:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.109 11:09:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.109 11:09:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.109 11:09:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.109 11:09:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.109 11:09:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.109 11:09:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.109 11:09:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.109 11:09:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.109 11:09:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.109 11:09:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.109 11:09:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.109 11:09:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.109 11:09:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.109 11:09:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.109 11:09:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.109 11:09:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.109 11:09:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.109 11:09:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.109 11:09:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.109 11:09:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.109 11:09:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.109 11:09:42 -- paths/export.sh@5 -- # export PATH 00:23:46.109 11:09:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.109 11:09:42 -- nvmf/common.sh@47 -- # : 0 00:23:46.109 11:09:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.109 11:09:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.109 11:09:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.109 11:09:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.109 11:09:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.109 11:09:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.109 11:09:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.109 11:09:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.109 11:09:42 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:46.109 11:09:42 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:46.109 11:09:42 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:46.109 11:09:42 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:46.109 11:09:42 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:46.109 11:09:42 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:46.109 11:09:42 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:46.109 11:09:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:46.109 11:09:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.109 11:09:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:46.109 11:09:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:46.109 11:09:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:46.109 11:09:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.109 11:09:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.110 11:09:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.110 11:09:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:46.110 11:09:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:46.110 11:09:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.110 11:09:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 11:09:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:52.711 11:09:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.711 11:09:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.711 11:09:49 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.711 11:09:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.711 11:09:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.711 11:09:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.711 11:09:49 -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.711 11:09:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.712 11:09:49 -- nvmf/common.sh@296 -- # e810=() 00:23:52.712 11:09:49 -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.712 11:09:49 -- nvmf/common.sh@297 -- # x722=() 00:23:52.712 11:09:49 -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.712 11:09:49 -- nvmf/common.sh@298 -- # mlx=() 00:23:52.712 11:09:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.712 11:09:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.712 11:09:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.712 11:09:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.712 11:09:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.712 11:09:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.712 11:09:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:52.712 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:52.712 11:09:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.712 11:09:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:52.712 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:52.712 11:09:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.712 11:09:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.712 11:09:49 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.712 11:09:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.712 11:09:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:52.712 11:09:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.712 11:09:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:52.712 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:52.712 11:09:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.712 11:09:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.712 11:09:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.712 11:09:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:52.712 11:09:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.712 11:09:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:52.712 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:52.712 11:09:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.712 11:09:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:52.712 11:09:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:52.712 11:09:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:52.712 11:09:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:52.712 11:09:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.712 11:09:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.712 11:09:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.712 11:09:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.712 11:09:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.712 11:09:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.712 11:09:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.712 11:09:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.712 11:09:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.712 11:09:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.712 11:09:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.712 11:09:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.712 11:09:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.712 11:09:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.712 11:09:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.712 11:09:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.712 11:09:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.712 11:09:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.712 11:09:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.712 11:09:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:52.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:23:52.712 00:23:52.712 --- 10.0.0.2 ping statistics --- 00:23:52.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.712 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:23:52.712 11:09:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:23:52.973 00:23:52.973 --- 10.0.0.1 ping statistics --- 00:23:52.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.973 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:52.973 11:09:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.973 11:09:49 -- nvmf/common.sh@411 -- # return 0 00:23:52.973 11:09:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:52.973 11:09:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.973 11:09:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:52.973 11:09:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:52.973 11:09:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.973 11:09:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:52.973 11:09:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:52.973 11:09:49 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:52.973 11:09:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:52.973 11:09:49 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:52.973 11:09:49 -- common/autotest_common.sh@10 -- # set +x 00:23:52.973 11:09:49 -- nvmf/common.sh@470 -- # nvmfpid=457848 00:23:52.973 11:09:49 -- nvmf/common.sh@471 -- # waitforlisten 457848 00:23:52.973 11:09:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:52.973 11:09:49 -- common/autotest_common.sh@827 -- # '[' -z 457848 ']' 00:23:52.973 11:09:49 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.973 11:09:49 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:52.973 11:09:49 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.973 11:09:49 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:52.973 11:09:49 -- common/autotest_common.sh@10 -- # set +x 00:23:52.973 [2024-05-15 11:09:49.470938] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:23:52.973 [2024-05-15 11:09:49.471002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.973 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.973 [2024-05-15 11:09:49.556820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.234 [2024-05-15 11:09:49.650377] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.234 [2024-05-15 11:09:49.650425] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
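The xtrace above is nvmf_tcp_init building the two-port TCP test topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2/24, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, port 4420 is opened in iptables, and both directions are ping-verified before the target app starts. Below is a condensed sketch of that setup, not the harness code itself, assuming the same interface names this run reported; the TARGET_NS variable is introduced here only for readability.

    # Topology sketch (run as root): cvl_0_0 becomes the target side inside a netns,
    # cvl_0_1 stays in the root namespace as the initiator side.
    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                  # target -> initiator
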
00:23:53.234 [2024-05-15 11:09:49.650434] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.234 [2024-05-15 11:09:49.650442] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.234 [2024-05-15 11:09:49.650448] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.234 [2024-05-15 11:09:49.650478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.806 11:09:50 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:53.806 11:09:50 -- common/autotest_common.sh@860 -- # return 0 00:23:53.806 11:09:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:53.806 11:09:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.806 11:09:50 -- common/autotest_common.sh@10 -- # set +x 00:23:53.806 11:09:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.806 11:09:50 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:53.806 11:09:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.806 11:09:50 -- common/autotest_common.sh@10 -- # set +x 00:23:53.806 [2024-05-15 11:09:50.315639] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.806 [2024-05-15 11:09:50.323605] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:53.806 [2024-05-15 11:09:50.323866] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:53.806 null0 00:23:53.806 [2024-05-15 11:09:50.355823] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.806 11:09:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.806 11:09:50 -- host/discovery_remove_ifc.sh@59 -- # hostpid=458192 00:23:53.806 11:09:50 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 458192 /tmp/host.sock 00:23:53.806 11:09:50 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:53.806 11:09:50 -- common/autotest_common.sh@827 -- # '[' -z 458192 ']' 00:23:53.806 11:09:50 -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:23:53.806 11:09:50 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:53.806 11:09:50 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:53.806 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:53.806 11:09:50 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:53.807 11:09:50 -- common/autotest_common.sh@10 -- # set +x 00:23:53.807 [2024-05-15 11:09:50.427316] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:23:53.807 [2024-05-15 11:09:50.427376] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458192 ] 00:23:53.807 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.067 [2024-05-15 11:09:50.490011] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.067 [2024-05-15 11:09:50.560759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.638 11:09:51 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:54.638 11:09:51 -- common/autotest_common.sh@860 -- # return 0 00:23:54.638 11:09:51 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.638 11:09:51 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:54.638 11:09:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.638 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:23:54.638 11:09:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.638 11:09:51 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:54.638 11:09:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.638 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:23:54.638 11:09:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.638 11:09:51 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:54.638 11:09:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.638 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:23:56.023 [2024-05-15 11:09:52.323707] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:56.023 [2024-05-15 11:09:52.323731] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:56.023 [2024-05-15 11:09:52.323744] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.023 [2024-05-15 11:09:52.412027] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:56.023 [2024-05-15 11:09:52.516375] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:56.023 [2024-05-15 11:09:52.516420] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:56.023 [2024-05-15 11:09:52.516441] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:56.023 [2024-05-15 11:09:52.516455] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:56.023 [2024-05-15 11:09:52.516475] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:56.023 11:09:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.023 11:09:52 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:56.023 11:09:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:56.023 11:09:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.023 11:09:52 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:56.023 [2024-05-15 11:09:52.523280] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xae8280 was disconnected and freed. delete nvme_qpair. 00:23:56.023 11:09:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.023 11:09:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:56.023 11:09:52 -- common/autotest_common.sh@10 -- # set +x 00:23:56.023 11:09:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:56.023 11:09:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.023 11:09:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:56.023 11:09:52 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:56.023 11:09:52 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:56.283 11:09:52 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:56.283 11:09:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:56.283 11:09:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.283 11:09:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:56.283 11:09:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.283 11:09:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:56.283 11:09:52 -- common/autotest_common.sh@10 -- # set +x 00:23:56.283 11:09:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:56.283 11:09:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.283 11:09:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:56.283 11:09:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:57.224 11:09:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.224 11:09:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.224 11:09:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.224 11:09:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.224 11:09:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.224 11:09:53 -- common/autotest_common.sh@10 -- # set +x 00:23:57.224 11:09:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:57.224 11:09:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.224 11:09:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:57.224 11:09:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:58.165 11:09:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:58.165 11:09:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.165 11:09:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:58.165 11:09:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.165 11:09:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:58.165 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:23:58.165 11:09:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:58.425 11:09:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.425 11:09:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:58.425 11:09:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:59.367 11:09:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:59.367 11:09:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.367 11:09:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:59.367 11:09:55 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.367 11:09:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:59.367 11:09:55 -- common/autotest_common.sh@10 -- # set +x 00:23:59.367 11:09:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:59.367 11:09:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.367 11:09:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:59.368 11:09:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:00.310 11:09:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:00.310 11:09:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.310 11:09:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:00.310 11:09:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.310 11:09:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:00.310 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:24:00.310 11:09:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:00.310 11:09:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.616 11:09:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:00.616 11:09:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:01.559 [2024-05-15 11:09:57.957030] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:01.559 [2024-05-15 11:09:57.957069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.559 [2024-05-15 11:09:57.957080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.559 [2024-05-15 11:09:57.957090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.559 [2024-05-15 11:09:57.957097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.559 [2024-05-15 11:09:57.957106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.559 [2024-05-15 11:09:57.957113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.559 [2024-05-15 11:09:57.957121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.559 [2024-05-15 11:09:57.957128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.559 [2024-05-15 11:09:57.957136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.559 [2024-05-15 11:09:57.957143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.559 [2024-05-15 11:09:57.957150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf5f0 is same with the state(5) to be set 00:24:01.559 [2024-05-15 11:09:57.967052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaf5f0 (9): Bad file descriptor 00:24:01.559 11:09:57 -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:01.559 [2024-05-15 11:09:57.977093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:01.559 11:09:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.559 11:09:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:01.559 11:09:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.559 11:09:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:01.559 11:09:57 -- common/autotest_common.sh@10 -- # set +x 00:24:01.559 11:09:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:02.502 [2024-05-15 11:09:59.001613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:03.444 [2024-05-15 11:10:00.025577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:03.444 [2024-05-15 11:10:00.025616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaf5f0 with addr=10.0.0.2, port=4420 00:24:03.444 [2024-05-15 11:10:00.025629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaf5f0 is same with the state(5) to be set 00:24:03.444 [2024-05-15 11:10:00.026002] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaf5f0 (9): Bad file descriptor 00:24:03.444 [2024-05-15 11:10:00.026025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.444 [2024-05-15 11:10:00.026046] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:03.444 [2024-05-15 11:10:00.026068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-05-15 11:10:00.026078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-05-15 11:10:00.026089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-05-15 11:10:00.026097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-05-15 11:10:00.026106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-05-15 11:10:00.026113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-05-15 11:10:00.026121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-05-15 11:10:00.026129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-05-15 11:10:00.026137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-05-15 11:10:00.026144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-05-15 11:10:00.026152] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
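The connect() failures with errno 110 (Connection timed out) and the qpair at 0xaaf5f0 ending up in failed state are the point of this test: after attaching through the discovery service, the script deleted the target's address and downed its interface inside the namespace, so the host's reconnect attempts to 10.0.0.2:4420 time out until the interface is restored. A minimal sketch of the driving steps follows, taken from the rpc_cmd and ip invocations traced above but written against scripts/rpc.py directly (rpc_cmd in this harness wraps that script); the RPC and HOST_SOCK variables are introduced here only for brevity.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/tmp/host.sock

    # Attach via the discovery service on 10.0.0.2:8009 and wait for the controller.
    $RPC -s $HOST_SOCK bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    $RPC -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name'    # expect nvme0n1

    # Remove the interface under the host; reconnects now fail until it comes back.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
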
00:24:03.444 [2024-05-15 11:10:00.026658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaea80 (9): Bad file descriptor 00:24:03.444 [2024-05-15 11:10:00.027669] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:03.444 [2024-05-15 11:10:00.027681] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:03.444 11:10:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.444 11:10:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:03.444 11:10:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:04.832 11:10:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:04.832 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:04.832 11:10:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:04.832 11:10:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:04.832 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:04.832 11:10:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:04.832 11:10:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:05.771 [2024-05-15 11:10:02.080635] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:05.771 [2024-05-15 11:10:02.080659] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:05.771 [2024-05-15 11:10:02.080673] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:05.771 [2024-05-15 11:10:02.168936] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:05.771 [2024-05-15 11:10:02.227581] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:05.771 [2024-05-15 11:10:02.227618] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:05.771 [2024-05-15 11:10:02.227639] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:05.771 [2024-05-15 11:10:02.227652] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 
done 00:24:05.771 [2024-05-15 11:10:02.227660] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:05.771 [2024-05-15 11:10:02.237259] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xabc480 was disconnected and freed. delete nvme_qpair. 00:24:05.771 11:10:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:05.771 11:10:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.771 11:10:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:05.771 11:10:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.771 11:10:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:05.771 11:10:02 -- common/autotest_common.sh@10 -- # set +x 00:24:05.771 11:10:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:05.771 11:10:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.772 11:10:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:05.772 11:10:02 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:05.772 11:10:02 -- host/discovery_remove_ifc.sh@90 -- # killprocess 458192 00:24:05.772 11:10:02 -- common/autotest_common.sh@946 -- # '[' -z 458192 ']' 00:24:05.772 11:10:02 -- common/autotest_common.sh@950 -- # kill -0 458192 00:24:05.772 11:10:02 -- common/autotest_common.sh@951 -- # uname 00:24:05.772 11:10:02 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:05.772 11:10:02 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 458192 00:24:05.772 11:10:02 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:05.772 11:10:02 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:05.772 11:10:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 458192' 00:24:05.772 killing process with pid 458192 00:24:05.772 11:10:02 -- common/autotest_common.sh@965 -- # kill 458192 00:24:05.772 11:10:02 -- common/autotest_common.sh@970 -- # wait 458192 00:24:06.032 11:10:02 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:06.032 11:10:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:06.032 11:10:02 -- nvmf/common.sh@117 -- # sync 00:24:06.032 11:10:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.032 11:10:02 -- nvmf/common.sh@120 -- # set +e 00:24:06.032 11:10:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.032 11:10:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.032 rmmod nvme_tcp 00:24:06.032 rmmod nvme_fabrics 00:24:06.032 rmmod nvme_keyring 00:24:06.032 11:10:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.032 11:10:02 -- nvmf/common.sh@124 -- # set -e 00:24:06.032 11:10:02 -- nvmf/common.sh@125 -- # return 0 00:24:06.032 11:10:02 -- nvmf/common.sh@478 -- # '[' -n 457848 ']' 00:24:06.032 11:10:02 -- nvmf/common.sh@479 -- # killprocess 457848 00:24:06.032 11:10:02 -- common/autotest_common.sh@946 -- # '[' -z 457848 ']' 00:24:06.032 11:10:02 -- common/autotest_common.sh@950 -- # kill -0 457848 00:24:06.032 11:10:02 -- common/autotest_common.sh@951 -- # uname 00:24:06.033 11:10:02 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:06.033 11:10:02 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 457848 00:24:06.033 11:10:02 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:06.033 11:10:02 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:06.033 11:10:02 -- common/autotest_common.sh@964 
-- # echo 'killing process with pid 457848' 00:24:06.033 killing process with pid 457848 00:24:06.033 11:10:02 -- common/autotest_common.sh@965 -- # kill 457848 00:24:06.033 [2024-05-15 11:10:02.630365] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:06.033 11:10:02 -- common/autotest_common.sh@970 -- # wait 457848 00:24:06.293 11:10:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:06.294 11:10:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:06.294 11:10:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:06.294 11:10:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.294 11:10:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.294 11:10:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.294 11:10:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.294 11:10:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.208 11:10:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.208 00:24:08.208 real 0m22.555s 00:24:08.208 user 0m25.803s 00:24:08.208 sys 0m6.449s 00:24:08.208 11:10:04 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:08.209 11:10:04 -- common/autotest_common.sh@10 -- # set +x 00:24:08.209 ************************************ 00:24:08.209 END TEST nvmf_discovery_remove_ifc 00:24:08.209 ************************************ 00:24:08.209 11:10:04 -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:08.209 11:10:04 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:08.209 11:10:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:08.209 11:10:04 -- common/autotest_common.sh@10 -- # set +x 00:24:08.469 ************************************ 00:24:08.469 START TEST nvmf_identify_kernel_target 00:24:08.469 ************************************ 00:24:08.469 11:10:04 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:08.469 * Looking for test storage... 
00:24:08.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.469 11:10:05 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.469 11:10:05 -- nvmf/common.sh@7 -- # uname -s 00:24:08.469 11:10:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.469 11:10:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.469 11:10:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.469 11:10:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.469 11:10:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.469 11:10:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.469 11:10:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.469 11:10:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.469 11:10:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.469 11:10:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.469 11:10:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.469 11:10:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.469 11:10:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.469 11:10:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.469 11:10:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.469 11:10:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.469 11:10:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.469 11:10:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.469 11:10:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.469 11:10:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.469 11:10:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.469 11:10:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.469 11:10:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.469 11:10:05 -- paths/export.sh@5 -- # export PATH 00:24:08.469 11:10:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.469 11:10:05 -- nvmf/common.sh@47 -- # : 0 00:24:08.469 11:10:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.469 11:10:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.469 11:10:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.469 11:10:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.469 11:10:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.469 11:10:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.469 11:10:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.469 11:10:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.469 11:10:05 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:08.469 11:10:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:08.469 11:10:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.469 11:10:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:08.469 11:10:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:08.469 11:10:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:08.469 11:10:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.469 11:10:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.469 11:10:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.469 11:10:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:08.469 11:10:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:08.469 11:10:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.469 11:10:05 -- common/autotest_common.sh@10 -- # set +x 00:24:16.607 11:10:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:16.607 11:10:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.607 11:10:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.607 11:10:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.607 11:10:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.607 11:10:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.607 11:10:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.607 11:10:11 -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.607 11:10:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.607 11:10:11 -- nvmf/common.sh@296 -- # e810=() 00:24:16.607 11:10:11 -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.607 11:10:11 -- nvmf/common.sh@297 -- # 
x722=() 00:24:16.607 11:10:11 -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.607 11:10:11 -- nvmf/common.sh@298 -- # mlx=() 00:24:16.607 11:10:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.607 11:10:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.607 11:10:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.607 11:10:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.607 11:10:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.607 11:10:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.607 11:10:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:16.607 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:16.607 11:10:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.607 11:10:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:16.607 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:16.607 11:10:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.607 11:10:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:16.607 11:10:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.607 11:10:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.607 11:10:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:16.607 11:10:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.607 11:10:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:16.607 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:16.607 11:10:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
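For reference, the enumeration traced above classifies NICs by PCI vendor/device ID (0x8086:0x159b is an Intel E810 port) and then resolves each matching PCI function to its kernel network interface by globbing sysfs, which is where the cvl_0_0 / cvl_0_1 names come from. A minimal sketch of that lookup, using the PCI addresses reported in this log (not a copy of the test script itself):

    # Map each E810 PCI function to the net device(s) the kernel created for it.
    # Addresses 0000:4b:00.0 / 0000:4b:00.1 are the ones reported above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue          # skip if the device has no bound net driver
            echo "Found net device under $pci: ${dev##*/}"
        done
    done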
00:24:16.607 11:10:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.607 11:10:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.607 11:10:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:16.607 11:10:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.607 11:10:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:16.608 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:16.608 11:10:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.608 11:10:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:16.608 11:10:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:16.608 11:10:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:16.608 11:10:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:16.608 11:10:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:16.608 11:10:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.608 11:10:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.608 11:10:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.608 11:10:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.608 11:10:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.608 11:10:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.608 11:10:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.608 11:10:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.608 11:10:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.608 11:10:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.608 11:10:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.608 11:10:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.608 11:10:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.608 11:10:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.608 11:10:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.608 11:10:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.608 11:10:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.608 11:10:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.608 11:10:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.608 11:10:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:24:16.608 00:24:16.608 --- 10.0.0.2 ping statistics --- 00:24:16.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.608 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:24:16.608 11:10:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:24:16.608 00:24:16.608 --- 10.0.0.1 ping statistics --- 00:24:16.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.608 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:16.608 11:10:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.608 11:10:12 -- nvmf/common.sh@411 -- # return 0 00:24:16.608 11:10:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:16.608 11:10:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.608 11:10:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:16.608 11:10:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:16.608 11:10:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.608 11:10:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:16.608 11:10:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:16.608 11:10:12 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:16.608 11:10:12 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:16.608 11:10:12 -- nvmf/common.sh@717 -- # local ip 00:24:16.608 11:10:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.608 11:10:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.608 11:10:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.608 11:10:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.608 11:10:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.608 11:10:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.608 11:10:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.608 11:10:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.608 11:10:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.608 11:10:12 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:16.608 11:10:12 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:16.608 11:10:12 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:16.608 11:10:12 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:16.608 11:10:12 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:16.608 11:10:12 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:16.608 11:10:12 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:16.608 11:10:12 -- nvmf/common.sh@628 -- # local block nvme 00:24:16.608 11:10:12 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:16.608 11:10:12 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:16.608 11:10:12 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:16.608 11:10:12 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:18.520 Waiting for block devices as requested 00:24:18.781 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:18.781 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:18.781 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:19.041 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:19.041 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:19.041 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:19.301 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:19.301 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:19.301 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:19.561 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:19.561 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:19.561 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:19.821 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:19.821 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:19.821 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:19.821 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:20.081 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:20.342 11:10:16 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:20.342 11:10:16 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:20.342 11:10:16 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:20.342 11:10:16 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:20.342 11:10:16 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:20.342 11:10:16 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:20.342 11:10:16 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:20.342 11:10:16 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:20.342 11:10:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:20.342 No valid GPT data, bailing 00:24:20.342 11:10:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:20.342 11:10:16 -- scripts/common.sh@391 -- # pt= 00:24:20.342 11:10:16 -- scripts/common.sh@392 -- # return 1 00:24:20.342 11:10:16 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:20.342 11:10:16 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:20.342 11:10:16 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:20.342 11:10:16 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:20.342 11:10:16 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:20.342 11:10:16 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:20.342 11:10:16 -- nvmf/common.sh@656 -- # echo 1 00:24:20.342 11:10:16 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:20.342 11:10:16 -- nvmf/common.sh@658 -- # echo 1 00:24:20.342 11:10:16 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:20.342 11:10:16 -- nvmf/common.sh@661 -- # echo tcp 00:24:20.342 11:10:16 -- nvmf/common.sh@662 -- # echo 4420 00:24:20.342 11:10:16 -- nvmf/common.sh@663 -- # echo ipv4 00:24:20.342 11:10:16 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:20.342 11:10:16 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:24:20.342 00:24:20.342 Discovery Log Number of Records 2, Generation counter 2 00:24:20.342 =====Discovery Log Entry 0====== 00:24:20.342 trtype: tcp 00:24:20.342 adrfam: ipv4 00:24:20.342 subtype: current discovery subsystem 00:24:20.342 treq: not specified, sq flow control disable supported 00:24:20.342 portid: 1 00:24:20.342 trsvcid: 4420 00:24:20.342 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:20.342 traddr: 10.0.0.1 00:24:20.342 eflags: none 00:24:20.342 sectype: none 00:24:20.342 =====Discovery Log Entry 1====== 00:24:20.342 trtype: tcp 00:24:20.342 adrfam: ipv4 00:24:20.342 subtype: nvme subsystem 00:24:20.342 treq: not specified, sq flow control disable supported 00:24:20.342 portid: 1 00:24:20.342 trsvcid: 4420 00:24:20.342 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:20.342 traddr: 10.0.0.1 00:24:20.342 eflags: none 00:24:20.342 sectype: none 00:24:20.342 11:10:16 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:20.342 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:20.342 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.604 ===================================================== 00:24:20.604 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:20.604 ===================================================== 00:24:20.604 Controller Capabilities/Features 00:24:20.604 ================================ 00:24:20.604 Vendor ID: 0000 00:24:20.604 Subsystem Vendor ID: 0000 00:24:20.604 Serial Number: d88e24a5c86e84a669b1 00:24:20.604 Model Number: Linux 00:24:20.604 Firmware Version: 6.7.0-68 00:24:20.604 Recommended Arb Burst: 0 00:24:20.604 IEEE OUI Identifier: 00 00 00 00:24:20.604 Multi-path I/O 00:24:20.604 May have multiple subsystem ports: No 00:24:20.604 May have multiple controllers: No 00:24:20.604 Associated with SR-IOV VF: No 00:24:20.604 Max Data Transfer Size: Unlimited 00:24:20.604 Max Number of Namespaces: 0 00:24:20.604 Max Number of I/O Queues: 1024 00:24:20.604 NVMe Specification Version (VS): 1.3 00:24:20.604 NVMe Specification Version (Identify): 1.3 00:24:20.604 Maximum Queue Entries: 1024 00:24:20.604 Contiguous Queues Required: No 00:24:20.604 Arbitration Mechanisms Supported 00:24:20.604 Weighted Round Robin: Not Supported 00:24:20.604 Vendor Specific: Not Supported 00:24:20.604 Reset Timeout: 7500 ms 00:24:20.604 Doorbell Stride: 4 bytes 00:24:20.604 NVM Subsystem Reset: Not Supported 00:24:20.604 Command Sets Supported 00:24:20.604 NVM Command Set: Supported 00:24:20.604 Boot Partition: Not Supported 00:24:20.604 Memory Page Size Minimum: 4096 bytes 00:24:20.604 Memory Page Size Maximum: 4096 bytes 00:24:20.604 Persistent Memory Region: Not Supported 00:24:20.604 Optional Asynchronous Events Supported 00:24:20.604 Namespace Attribute Notices: Not Supported 00:24:20.604 Firmware Activation Notices: Not Supported 00:24:20.604 ANA Change Notices: Not Supported 00:24:20.604 PLE Aggregate Log Change Notices: Not Supported 00:24:20.604 LBA Status Info Alert Notices: Not Supported 00:24:20.604 EGE Aggregate Log Change Notices: Not Supported 00:24:20.604 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.604 Zone Descriptor Change Notices: Not Supported 00:24:20.604 Discovery Log Change Notices: Supported 
00:24:20.604 Controller Attributes 00:24:20.604 128-bit Host Identifier: Not Supported 00:24:20.604 Non-Operational Permissive Mode: Not Supported 00:24:20.604 NVM Sets: Not Supported 00:24:20.604 Read Recovery Levels: Not Supported 00:24:20.604 Endurance Groups: Not Supported 00:24:20.604 Predictable Latency Mode: Not Supported 00:24:20.604 Traffic Based Keep ALive: Not Supported 00:24:20.604 Namespace Granularity: Not Supported 00:24:20.604 SQ Associations: Not Supported 00:24:20.604 UUID List: Not Supported 00:24:20.604 Multi-Domain Subsystem: Not Supported 00:24:20.604 Fixed Capacity Management: Not Supported 00:24:20.604 Variable Capacity Management: Not Supported 00:24:20.604 Delete Endurance Group: Not Supported 00:24:20.604 Delete NVM Set: Not Supported 00:24:20.604 Extended LBA Formats Supported: Not Supported 00:24:20.604 Flexible Data Placement Supported: Not Supported 00:24:20.604 00:24:20.604 Controller Memory Buffer Support 00:24:20.604 ================================ 00:24:20.604 Supported: No 00:24:20.604 00:24:20.604 Persistent Memory Region Support 00:24:20.604 ================================ 00:24:20.604 Supported: No 00:24:20.604 00:24:20.604 Admin Command Set Attributes 00:24:20.604 ============================ 00:24:20.604 Security Send/Receive: Not Supported 00:24:20.604 Format NVM: Not Supported 00:24:20.604 Firmware Activate/Download: Not Supported 00:24:20.604 Namespace Management: Not Supported 00:24:20.604 Device Self-Test: Not Supported 00:24:20.604 Directives: Not Supported 00:24:20.604 NVMe-MI: Not Supported 00:24:20.604 Virtualization Management: Not Supported 00:24:20.604 Doorbell Buffer Config: Not Supported 00:24:20.604 Get LBA Status Capability: Not Supported 00:24:20.604 Command & Feature Lockdown Capability: Not Supported 00:24:20.604 Abort Command Limit: 1 00:24:20.604 Async Event Request Limit: 1 00:24:20.604 Number of Firmware Slots: N/A 00:24:20.604 Firmware Slot 1 Read-Only: N/A 00:24:20.604 Firmware Activation Without Reset: N/A 00:24:20.604 Multiple Update Detection Support: N/A 00:24:20.604 Firmware Update Granularity: No Information Provided 00:24:20.604 Per-Namespace SMART Log: No 00:24:20.604 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.604 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:20.604 Command Effects Log Page: Not Supported 00:24:20.604 Get Log Page Extended Data: Supported 00:24:20.604 Telemetry Log Pages: Not Supported 00:24:20.604 Persistent Event Log Pages: Not Supported 00:24:20.604 Supported Log Pages Log Page: May Support 00:24:20.604 Commands Supported & Effects Log Page: Not Supported 00:24:20.604 Feature Identifiers & Effects Log Page:May Support 00:24:20.604 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.604 Data Area 4 for Telemetry Log: Not Supported 00:24:20.604 Error Log Page Entries Supported: 1 00:24:20.604 Keep Alive: Not Supported 00:24:20.604 00:24:20.604 NVM Command Set Attributes 00:24:20.604 ========================== 00:24:20.604 Submission Queue Entry Size 00:24:20.604 Max: 1 00:24:20.604 Min: 1 00:24:20.604 Completion Queue Entry Size 00:24:20.604 Max: 1 00:24:20.604 Min: 1 00:24:20.604 Number of Namespaces: 0 00:24:20.604 Compare Command: Not Supported 00:24:20.604 Write Uncorrectable Command: Not Supported 00:24:20.604 Dataset Management Command: Not Supported 00:24:20.604 Write Zeroes Command: Not Supported 00:24:20.604 Set Features Save Field: Not Supported 00:24:20.604 Reservations: Not Supported 00:24:20.604 Timestamp: Not Supported 00:24:20.604 Copy: Not 
Supported 00:24:20.604 Volatile Write Cache: Not Present 00:24:20.604 Atomic Write Unit (Normal): 1 00:24:20.604 Atomic Write Unit (PFail): 1 00:24:20.604 Atomic Compare & Write Unit: 1 00:24:20.605 Fused Compare & Write: Not Supported 00:24:20.605 Scatter-Gather List 00:24:20.605 SGL Command Set: Supported 00:24:20.605 SGL Keyed: Not Supported 00:24:20.605 SGL Bit Bucket Descriptor: Not Supported 00:24:20.605 SGL Metadata Pointer: Not Supported 00:24:20.605 Oversized SGL: Not Supported 00:24:20.605 SGL Metadata Address: Not Supported 00:24:20.605 SGL Offset: Supported 00:24:20.605 Transport SGL Data Block: Not Supported 00:24:20.605 Replay Protected Memory Block: Not Supported 00:24:20.605 00:24:20.605 Firmware Slot Information 00:24:20.605 ========================= 00:24:20.605 Active slot: 0 00:24:20.605 00:24:20.605 00:24:20.605 Error Log 00:24:20.605 ========= 00:24:20.605 00:24:20.605 Active Namespaces 00:24:20.605 ================= 00:24:20.605 Discovery Log Page 00:24:20.605 ================== 00:24:20.605 Generation Counter: 2 00:24:20.605 Number of Records: 2 00:24:20.605 Record Format: 0 00:24:20.605 00:24:20.605 Discovery Log Entry 0 00:24:20.605 ---------------------- 00:24:20.605 Transport Type: 3 (TCP) 00:24:20.605 Address Family: 1 (IPv4) 00:24:20.605 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:20.605 Entry Flags: 00:24:20.605 Duplicate Returned Information: 0 00:24:20.605 Explicit Persistent Connection Support for Discovery: 0 00:24:20.605 Transport Requirements: 00:24:20.605 Secure Channel: Not Specified 00:24:20.605 Port ID: 1 (0x0001) 00:24:20.605 Controller ID: 65535 (0xffff) 00:24:20.605 Admin Max SQ Size: 32 00:24:20.605 Transport Service Identifier: 4420 00:24:20.605 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:20.605 Transport Address: 10.0.0.1 00:24:20.605 Discovery Log Entry 1 00:24:20.605 ---------------------- 00:24:20.605 Transport Type: 3 (TCP) 00:24:20.605 Address Family: 1 (IPv4) 00:24:20.605 Subsystem Type: 2 (NVM Subsystem) 00:24:20.605 Entry Flags: 00:24:20.605 Duplicate Returned Information: 0 00:24:20.605 Explicit Persistent Connection Support for Discovery: 0 00:24:20.605 Transport Requirements: 00:24:20.605 Secure Channel: Not Specified 00:24:20.605 Port ID: 1 (0x0001) 00:24:20.605 Controller ID: 65535 (0xffff) 00:24:20.605 Admin Max SQ Size: 32 00:24:20.605 Transport Service Identifier: 4420 00:24:20.605 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:20.605 Transport Address: 10.0.0.1 00:24:20.605 11:10:17 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:20.605 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.605 get_feature(0x01) failed 00:24:20.605 get_feature(0x02) failed 00:24:20.605 get_feature(0x04) failed 00:24:20.605 ===================================================== 00:24:20.605 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:20.605 ===================================================== 00:24:20.605 Controller Capabilities/Features 00:24:20.605 ================================ 00:24:20.605 Vendor ID: 0000 00:24:20.605 Subsystem Vendor ID: 0000 00:24:20.605 Serial Number: 7f360ec3f602f010210b 00:24:20.605 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:20.605 Firmware Version: 6.7.0-68 00:24:20.605 Recommended Arb Burst: 6 00:24:20.605 IEEE OUI Identifier: 00 00 00 
00:24:20.605 Multi-path I/O 00:24:20.605 May have multiple subsystem ports: Yes 00:24:20.605 May have multiple controllers: Yes 00:24:20.605 Associated with SR-IOV VF: No 00:24:20.605 Max Data Transfer Size: Unlimited 00:24:20.605 Max Number of Namespaces: 1024 00:24:20.605 Max Number of I/O Queues: 128 00:24:20.605 NVMe Specification Version (VS): 1.3 00:24:20.605 NVMe Specification Version (Identify): 1.3 00:24:20.605 Maximum Queue Entries: 1024 00:24:20.605 Contiguous Queues Required: No 00:24:20.605 Arbitration Mechanisms Supported 00:24:20.605 Weighted Round Robin: Not Supported 00:24:20.605 Vendor Specific: Not Supported 00:24:20.605 Reset Timeout: 7500 ms 00:24:20.605 Doorbell Stride: 4 bytes 00:24:20.605 NVM Subsystem Reset: Not Supported 00:24:20.605 Command Sets Supported 00:24:20.605 NVM Command Set: Supported 00:24:20.605 Boot Partition: Not Supported 00:24:20.605 Memory Page Size Minimum: 4096 bytes 00:24:20.605 Memory Page Size Maximum: 4096 bytes 00:24:20.605 Persistent Memory Region: Not Supported 00:24:20.605 Optional Asynchronous Events Supported 00:24:20.605 Namespace Attribute Notices: Supported 00:24:20.605 Firmware Activation Notices: Not Supported 00:24:20.605 ANA Change Notices: Supported 00:24:20.605 PLE Aggregate Log Change Notices: Not Supported 00:24:20.605 LBA Status Info Alert Notices: Not Supported 00:24:20.605 EGE Aggregate Log Change Notices: Not Supported 00:24:20.605 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.605 Zone Descriptor Change Notices: Not Supported 00:24:20.605 Discovery Log Change Notices: Not Supported 00:24:20.605 Controller Attributes 00:24:20.605 128-bit Host Identifier: Supported 00:24:20.605 Non-Operational Permissive Mode: Not Supported 00:24:20.605 NVM Sets: Not Supported 00:24:20.605 Read Recovery Levels: Not Supported 00:24:20.605 Endurance Groups: Not Supported 00:24:20.605 Predictable Latency Mode: Not Supported 00:24:20.605 Traffic Based Keep ALive: Supported 00:24:20.605 Namespace Granularity: Not Supported 00:24:20.605 SQ Associations: Not Supported 00:24:20.605 UUID List: Not Supported 00:24:20.605 Multi-Domain Subsystem: Not Supported 00:24:20.605 Fixed Capacity Management: Not Supported 00:24:20.605 Variable Capacity Management: Not Supported 00:24:20.605 Delete Endurance Group: Not Supported 00:24:20.605 Delete NVM Set: Not Supported 00:24:20.605 Extended LBA Formats Supported: Not Supported 00:24:20.605 Flexible Data Placement Supported: Not Supported 00:24:20.605 00:24:20.605 Controller Memory Buffer Support 00:24:20.605 ================================ 00:24:20.605 Supported: No 00:24:20.605 00:24:20.605 Persistent Memory Region Support 00:24:20.605 ================================ 00:24:20.605 Supported: No 00:24:20.605 00:24:20.605 Admin Command Set Attributes 00:24:20.605 ============================ 00:24:20.605 Security Send/Receive: Not Supported 00:24:20.605 Format NVM: Not Supported 00:24:20.605 Firmware Activate/Download: Not Supported 00:24:20.605 Namespace Management: Not Supported 00:24:20.605 Device Self-Test: Not Supported 00:24:20.605 Directives: Not Supported 00:24:20.605 NVMe-MI: Not Supported 00:24:20.605 Virtualization Management: Not Supported 00:24:20.605 Doorbell Buffer Config: Not Supported 00:24:20.605 Get LBA Status Capability: Not Supported 00:24:20.605 Command & Feature Lockdown Capability: Not Supported 00:24:20.605 Abort Command Limit: 4 00:24:20.605 Async Event Request Limit: 4 00:24:20.605 Number of Firmware Slots: N/A 00:24:20.605 Firmware Slot 1 Read-Only: N/A 00:24:20.605 
Firmware Activation Without Reset: N/A 00:24:20.605 Multiple Update Detection Support: N/A 00:24:20.605 Firmware Update Granularity: No Information Provided 00:24:20.605 Per-Namespace SMART Log: Yes 00:24:20.605 Asymmetric Namespace Access Log Page: Supported 00:24:20.605 ANA Transition Time : 10 sec 00:24:20.605 00:24:20.605 Asymmetric Namespace Access Capabilities 00:24:20.605 ANA Optimized State : Supported 00:24:20.605 ANA Non-Optimized State : Supported 00:24:20.605 ANA Inaccessible State : Supported 00:24:20.605 ANA Persistent Loss State : Supported 00:24:20.605 ANA Change State : Supported 00:24:20.605 ANAGRPID is not changed : No 00:24:20.605 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:20.605 00:24:20.605 ANA Group Identifier Maximum : 128 00:24:20.605 Number of ANA Group Identifiers : 128 00:24:20.605 Max Number of Allowed Namespaces : 1024 00:24:20.605 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:20.605 Command Effects Log Page: Supported 00:24:20.605 Get Log Page Extended Data: Supported 00:24:20.605 Telemetry Log Pages: Not Supported 00:24:20.605 Persistent Event Log Pages: Not Supported 00:24:20.605 Supported Log Pages Log Page: May Support 00:24:20.605 Commands Supported & Effects Log Page: Not Supported 00:24:20.605 Feature Identifiers & Effects Log Page:May Support 00:24:20.605 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.605 Data Area 4 for Telemetry Log: Not Supported 00:24:20.605 Error Log Page Entries Supported: 128 00:24:20.605 Keep Alive: Supported 00:24:20.605 Keep Alive Granularity: 1000 ms 00:24:20.605 00:24:20.605 NVM Command Set Attributes 00:24:20.605 ========================== 00:24:20.605 Submission Queue Entry Size 00:24:20.605 Max: 64 00:24:20.605 Min: 64 00:24:20.605 Completion Queue Entry Size 00:24:20.605 Max: 16 00:24:20.605 Min: 16 00:24:20.605 Number of Namespaces: 1024 00:24:20.605 Compare Command: Not Supported 00:24:20.605 Write Uncorrectable Command: Not Supported 00:24:20.605 Dataset Management Command: Supported 00:24:20.605 Write Zeroes Command: Supported 00:24:20.606 Set Features Save Field: Not Supported 00:24:20.606 Reservations: Not Supported 00:24:20.606 Timestamp: Not Supported 00:24:20.606 Copy: Not Supported 00:24:20.606 Volatile Write Cache: Present 00:24:20.606 Atomic Write Unit (Normal): 1 00:24:20.606 Atomic Write Unit (PFail): 1 00:24:20.606 Atomic Compare & Write Unit: 1 00:24:20.606 Fused Compare & Write: Not Supported 00:24:20.606 Scatter-Gather List 00:24:20.606 SGL Command Set: Supported 00:24:20.606 SGL Keyed: Not Supported 00:24:20.606 SGL Bit Bucket Descriptor: Not Supported 00:24:20.606 SGL Metadata Pointer: Not Supported 00:24:20.606 Oversized SGL: Not Supported 00:24:20.606 SGL Metadata Address: Not Supported 00:24:20.606 SGL Offset: Supported 00:24:20.606 Transport SGL Data Block: Not Supported 00:24:20.606 Replay Protected Memory Block: Not Supported 00:24:20.606 00:24:20.606 Firmware Slot Information 00:24:20.606 ========================= 00:24:20.606 Active slot: 0 00:24:20.606 00:24:20.606 Asymmetric Namespace Access 00:24:20.606 =========================== 00:24:20.606 Change Count : 0 00:24:20.606 Number of ANA Group Descriptors : 1 00:24:20.606 ANA Group Descriptor : 0 00:24:20.606 ANA Group ID : 1 00:24:20.606 Number of NSID Values : 1 00:24:20.606 Change Count : 0 00:24:20.606 ANA State : 1 00:24:20.606 Namespace Identifier : 1 00:24:20.606 00:24:20.606 Commands Supported and Effects 00:24:20.606 ============================== 00:24:20.606 Admin Commands 00:24:20.606 -------------- 
00:24:20.606 Get Log Page (02h): Supported 00:24:20.606 Identify (06h): Supported 00:24:20.606 Abort (08h): Supported 00:24:20.606 Set Features (09h): Supported 00:24:20.606 Get Features (0Ah): Supported 00:24:20.606 Asynchronous Event Request (0Ch): Supported 00:24:20.606 Keep Alive (18h): Supported 00:24:20.606 I/O Commands 00:24:20.606 ------------ 00:24:20.606 Flush (00h): Supported 00:24:20.606 Write (01h): Supported LBA-Change 00:24:20.606 Read (02h): Supported 00:24:20.606 Write Zeroes (08h): Supported LBA-Change 00:24:20.606 Dataset Management (09h): Supported 00:24:20.606 00:24:20.606 Error Log 00:24:20.606 ========= 00:24:20.606 Entry: 0 00:24:20.606 Error Count: 0x3 00:24:20.606 Submission Queue Id: 0x0 00:24:20.606 Command Id: 0x5 00:24:20.606 Phase Bit: 0 00:24:20.606 Status Code: 0x2 00:24:20.606 Status Code Type: 0x0 00:24:20.606 Do Not Retry: 1 00:24:20.606 Error Location: 0x28 00:24:20.606 LBA: 0x0 00:24:20.606 Namespace: 0x0 00:24:20.606 Vendor Log Page: 0x0 00:24:20.606 ----------- 00:24:20.606 Entry: 1 00:24:20.606 Error Count: 0x2 00:24:20.606 Submission Queue Id: 0x0 00:24:20.606 Command Id: 0x5 00:24:20.606 Phase Bit: 0 00:24:20.606 Status Code: 0x2 00:24:20.606 Status Code Type: 0x0 00:24:20.606 Do Not Retry: 1 00:24:20.606 Error Location: 0x28 00:24:20.606 LBA: 0x0 00:24:20.606 Namespace: 0x0 00:24:20.606 Vendor Log Page: 0x0 00:24:20.606 ----------- 00:24:20.606 Entry: 2 00:24:20.606 Error Count: 0x1 00:24:20.606 Submission Queue Id: 0x0 00:24:20.606 Command Id: 0x4 00:24:20.606 Phase Bit: 0 00:24:20.606 Status Code: 0x2 00:24:20.606 Status Code Type: 0x0 00:24:20.606 Do Not Retry: 1 00:24:20.606 Error Location: 0x28 00:24:20.606 LBA: 0x0 00:24:20.606 Namespace: 0x0 00:24:20.606 Vendor Log Page: 0x0 00:24:20.606 00:24:20.606 Number of Queues 00:24:20.606 ================ 00:24:20.606 Number of I/O Submission Queues: 128 00:24:20.606 Number of I/O Completion Queues: 128 00:24:20.606 00:24:20.606 ZNS Specific Controller Data 00:24:20.606 ============================ 00:24:20.606 Zone Append Size Limit: 0 00:24:20.606 00:24:20.606 00:24:20.606 Active Namespaces 00:24:20.606 ================= 00:24:20.606 get_feature(0x05) failed 00:24:20.606 Namespace ID:1 00:24:20.606 Command Set Identifier: NVM (00h) 00:24:20.606 Deallocate: Supported 00:24:20.606 Deallocated/Unwritten Error: Not Supported 00:24:20.606 Deallocated Read Value: Unknown 00:24:20.606 Deallocate in Write Zeroes: Not Supported 00:24:20.606 Deallocated Guard Field: 0xFFFF 00:24:20.606 Flush: Supported 00:24:20.606 Reservation: Not Supported 00:24:20.606 Namespace Sharing Capabilities: Multiple Controllers 00:24:20.606 Size (in LBAs): 3750748848 (1788GiB) 00:24:20.606 Capacity (in LBAs): 3750748848 (1788GiB) 00:24:20.606 Utilization (in LBAs): 3750748848 (1788GiB) 00:24:20.606 UUID: be548d31-573c-4ec2-ae77-8de53b170b1a 00:24:20.606 Thin Provisioning: Not Supported 00:24:20.606 Per-NS Atomic Units: Yes 00:24:20.606 Atomic Write Unit (Normal): 8 00:24:20.606 Atomic Write Unit (PFail): 8 00:24:20.606 Preferred Write Granularity: 8 00:24:20.606 Atomic Compare & Write Unit: 8 00:24:20.606 Atomic Boundary Size (Normal): 0 00:24:20.606 Atomic Boundary Size (PFail): 0 00:24:20.606 Atomic Boundary Offset: 0 00:24:20.606 NGUID/EUI64 Never Reused: No 00:24:20.606 ANA group ID: 1 00:24:20.606 Namespace Write Protected: No 00:24:20.606 Number of LBA Formats: 1 00:24:20.606 Current LBA Format: LBA Format #00 00:24:20.606 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:20.606 00:24:20.606 11:10:17 -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:20.606 11:10:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:20.606 11:10:17 -- nvmf/common.sh@117 -- # sync 00:24:20.606 11:10:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.606 11:10:17 -- nvmf/common.sh@120 -- # set +e 00:24:20.606 11:10:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.606 11:10:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.606 rmmod nvme_tcp 00:24:20.606 rmmod nvme_fabrics 00:24:20.606 11:10:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:20.606 11:10:17 -- nvmf/common.sh@124 -- # set -e 00:24:20.606 11:10:17 -- nvmf/common.sh@125 -- # return 0 00:24:20.606 11:10:17 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:24:20.606 11:10:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:20.606 11:10:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:20.606 11:10:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:20.606 11:10:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.606 11:10:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.606 11:10:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.606 11:10:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.606 11:10:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.149 11:10:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:23.149 11:10:19 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:23.149 11:10:19 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:23.149 11:10:19 -- nvmf/common.sh@675 -- # echo 0 00:24:23.149 11:10:19 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.149 11:10:19 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:23.149 11:10:19 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:23.149 11:10:19 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.149 11:10:19 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:23.149 11:10:19 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:23.149 11:10:19 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:25.693 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:25.693 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:25.693 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:25.693 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:25.953 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:27.864 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:24:28.125 00:24:28.125 real 0m19.675s 00:24:28.125 user 0m4.745s 00:24:28.125 sys 0m10.186s 00:24:28.125 
11:10:24 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:28.125 11:10:24 -- common/autotest_common.sh@10 -- # set +x 00:24:28.125 ************************************ 00:24:28.125 END TEST nvmf_identify_kernel_target 00:24:28.125 ************************************ 00:24:28.125 11:10:24 -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:28.125 11:10:24 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:28.125 11:10:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:28.125 11:10:24 -- common/autotest_common.sh@10 -- # set +x 00:24:28.125 ************************************ 00:24:28.125 START TEST nvmf_auth 00:24:28.125 ************************************ 00:24:28.125 11:10:24 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:28.125 * Looking for test storage... 00:24:28.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.125 11:10:24 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.125 11:10:24 -- nvmf/common.sh@7 -- # uname -s 00:24:28.125 11:10:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.125 11:10:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.125 11:10:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.125 11:10:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.125 11:10:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.125 11:10:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.125 11:10:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.125 11:10:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.125 11:10:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.125 11:10:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.125 11:10:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.125 11:10:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.125 11:10:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.125 11:10:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.125 11:10:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.125 11:10:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.125 11:10:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.125 11:10:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.125 11:10:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.125 11:10:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.125 11:10:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.125 11:10:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.125 11:10:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.125 11:10:24 -- paths/export.sh@5 -- # export PATH 00:24:28.125 11:10:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.125 11:10:24 -- nvmf/common.sh@47 -- # : 0 00:24:28.125 11:10:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.125 11:10:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.125 11:10:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.125 11:10:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.125 11:10:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.125 11:10:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.125 11:10:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.125 11:10:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.125 11:10:24 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:28.125 11:10:24 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:28.125 11:10:24 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:28.125 11:10:24 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:28.125 11:10:24 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:28.125 11:10:24 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:28.125 11:10:24 -- host/auth.sh@21 -- # keys=() 00:24:28.125 11:10:24 -- host/auth.sh@21 -- # ckeys=() 00:24:28.125 11:10:24 -- host/auth.sh@81 -- # nvmftestinit 00:24:28.125 11:10:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:28.125 11:10:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.125 11:10:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:28.125 11:10:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:28.125 11:10:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:28.125 11:10:24 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.125 11:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.125 11:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.125 11:10:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:28.125 11:10:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:28.126 11:10:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.126 11:10:24 -- common/autotest_common.sh@10 -- # set +x 00:24:34.710 11:10:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:34.710 11:10:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.710 11:10:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.710 11:10:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.710 11:10:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.710 11:10:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.710 11:10:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.710 11:10:31 -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.710 11:10:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.710 11:10:31 -- nvmf/common.sh@296 -- # e810=() 00:24:34.710 11:10:31 -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.710 11:10:31 -- nvmf/common.sh@297 -- # x722=() 00:24:34.710 11:10:31 -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.710 11:10:31 -- nvmf/common.sh@298 -- # mlx=() 00:24:34.710 11:10:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.710 11:10:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.710 11:10:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.710 11:10:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.710 11:10:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.710 11:10:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.711 11:10:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.711 11:10:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.971 11:10:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.971 11:10:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.971 11:10:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.971 11:10:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.971 11:10:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.971 11:10:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.971 11:10:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.971 11:10:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.971 11:10:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:34.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:34.971 11:10:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
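For reference, the configure_kernel_target steps traced earlier in the identify_kernel_nvmf run amount to building a Linux kernel NVMe-oF target through configfs: create a subsystem and a namespace, back the namespace with the local /dev/nvme0n1, open a TCP listener on 10.0.0.1:4420, and link the subsystem to the port. The trace only shows the values being written, so the standard nvmet configfs attribute names below are assumed; this is a condensed sketch, not the script verbatim:

    modprobe nvmet                                            # nvmet_tcp is pulled in when the port is set to tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo 1            > "$subsys/attr_allow_any_host"         # assumed: hosts must be allowed for discovery to succeed
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

The clean_kernel_target teardown traced just above undoes this in reverse: remove the port/subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.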
00:24:34.971 11:10:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:34.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:34.971 11:10:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.971 11:10:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.971 11:10:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.971 11:10:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:34.971 11:10:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.971 11:10:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:34.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:34.971 11:10:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.971 11:10:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.971 11:10:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.971 11:10:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:34.971 11:10:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.971 11:10:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:34.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:34.971 11:10:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.971 11:10:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:34.971 11:10:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:34.971 11:10:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:34.971 11:10:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:34.971 11:10:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.971 11:10:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.971 11:10:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.971 11:10:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.971 11:10:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.971 11:10:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.971 11:10:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.971 11:10:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.971 11:10:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.971 11:10:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.971 11:10:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.971 11:10:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.971 11:10:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.971 11:10:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.971 11:10:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.971 11:10:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.971 11:10:31 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.971 11:10:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.232 11:10:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.232 11:10:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:35.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:24:35.232 00:24:35.232 --- 10.0.0.2 ping statistics --- 00:24:35.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.232 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:24:35.232 11:10:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:24:35.232 00:24:35.232 --- 10.0.0.1 ping statistics --- 00:24:35.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.232 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:24:35.232 11:10:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.232 11:10:31 -- nvmf/common.sh@411 -- # return 0 00:24:35.232 11:10:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:35.232 11:10:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.232 11:10:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:35.232 11:10:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:35.232 11:10:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.232 11:10:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:35.232 11:10:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:35.232 11:10:31 -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:24:35.232 11:10:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:35.232 11:10:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:35.232 11:10:31 -- common/autotest_common.sh@10 -- # set +x 00:24:35.232 11:10:31 -- nvmf/common.sh@470 -- # nvmfpid=472127 00:24:35.232 11:10:31 -- nvmf/common.sh@471 -- # waitforlisten 472127 00:24:35.232 11:10:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:35.232 11:10:31 -- common/autotest_common.sh@827 -- # '[' -z 472127 ']' 00:24:35.232 11:10:31 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.232 11:10:31 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:35.232 11:10:31 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
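As in the earlier identify_kernel_nvmf run, nvmftestinit splits the two E810 ports between the root namespace and a private one: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in the firewall, and nvmf_tgt is then launched inside the namespace. Condensed from the commands traced above (the two ports are presumably cabled to each other, which is why the cross-namespace pings succeed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, private namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator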
00:24:35.232 11:10:31 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:35.232 11:10:31 -- common/autotest_common.sh@10 -- # set +x 00:24:36.176 11:10:32 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.176 11:10:32 -- common/autotest_common.sh@860 -- # return 0 00:24:36.176 11:10:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:36.176 11:10:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.176 11:10:32 -- common/autotest_common.sh@10 -- # set +x 00:24:36.176 11:10:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.176 11:10:32 -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:36.176 11:10:32 -- host/auth.sh@86 -- # gen_key null 32 00:24:36.176 11:10:32 -- host/auth.sh@55 -- # local digest len file key 00:24:36.176 11:10:32 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.176 11:10:32 -- host/auth.sh@56 -- # local -A digests 00:24:36.176 11:10:32 -- host/auth.sh@58 -- # digest=null 00:24:36.176 11:10:32 -- host/auth.sh@58 -- # len=32 00:24:36.176 11:10:32 -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:36.176 11:10:32 -- host/auth.sh@59 -- # key=1b3027c86f0a57b838a1f3c13e4bb6c5 00:24:36.176 11:10:32 -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:24:36.176 11:10:32 -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.sY8 00:24:36.176 11:10:32 -- host/auth.sh@61 -- # format_dhchap_key 1b3027c86f0a57b838a1f3c13e4bb6c5 0 00:24:36.176 11:10:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 1b3027c86f0a57b838a1f3c13e4bb6c5 0 00:24:36.176 11:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.176 11:10:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.176 11:10:32 -- nvmf/common.sh@693 -- # key=1b3027c86f0a57b838a1f3c13e4bb6c5 00:24:36.176 11:10:32 -- nvmf/common.sh@693 -- # digest=0 00:24:36.176 11:10:32 -- nvmf/common.sh@694 -- # python - 00:24:36.176 11:10:32 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.sY8 00:24:36.176 11:10:32 -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.sY8 00:24:36.176 11:10:32 -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.sY8 00:24:36.176 11:10:32 -- host/auth.sh@86 -- # gen_key sha512 64 00:24:36.176 11:10:32 -- host/auth.sh@55 -- # local digest len file key 00:24:36.176 11:10:32 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.176 11:10:32 -- host/auth.sh@56 -- # local -A digests 00:24:36.176 11:10:32 -- host/auth.sh@58 -- # digest=sha512 00:24:36.176 11:10:32 -- host/auth.sh@58 -- # len=64 00:24:36.176 11:10:32 -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:36.176 11:10:32 -- host/auth.sh@59 -- # key=3cb9645e0a9f7f014fe2c2cf816b288b2c8c6cfa9d760df0680aed0ffad910c7 00:24:36.176 11:10:32 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:24:36.176 11:10:32 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.u5s 00:24:36.176 11:10:32 -- host/auth.sh@61 -- # format_dhchap_key 3cb9645e0a9f7f014fe2c2cf816b288b2c8c6cfa9d760df0680aed0ffad910c7 3 00:24:36.177 11:10:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 3cb9645e0a9f7f014fe2c2cf816b288b2c8c6cfa9d760df0680aed0ffad910c7 3 00:24:36.177 11:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # 
key=3cb9645e0a9f7f014fe2c2cf816b288b2c8c6cfa9d760df0680aed0ffad910c7 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # digest=3 00:24:36.177 11:10:32 -- nvmf/common.sh@694 -- # python - 00:24:36.177 11:10:32 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.u5s 00:24:36.177 11:10:32 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.u5s 00:24:36.177 11:10:32 -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.u5s 00:24:36.177 11:10:32 -- host/auth.sh@87 -- # gen_key null 48 00:24:36.177 11:10:32 -- host/auth.sh@55 -- # local digest len file key 00:24:36.177 11:10:32 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.177 11:10:32 -- host/auth.sh@56 -- # local -A digests 00:24:36.177 11:10:32 -- host/auth.sh@58 -- # digest=null 00:24:36.177 11:10:32 -- host/auth.sh@58 -- # len=48 00:24:36.177 11:10:32 -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:36.177 11:10:32 -- host/auth.sh@59 -- # key=32dd85849f01be637c6c4feb5f63f8697f2d64500a5c6e31 00:24:36.177 11:10:32 -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:24:36.177 11:10:32 -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.QAI 00:24:36.177 11:10:32 -- host/auth.sh@61 -- # format_dhchap_key 32dd85849f01be637c6c4feb5f63f8697f2d64500a5c6e31 0 00:24:36.177 11:10:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 32dd85849f01be637c6c4feb5f63f8697f2d64500a5c6e31 0 00:24:36.177 11:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # key=32dd85849f01be637c6c4feb5f63f8697f2d64500a5c6e31 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # digest=0 00:24:36.177 11:10:32 -- nvmf/common.sh@694 -- # python - 00:24:36.177 11:10:32 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.QAI 00:24:36.177 11:10:32 -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.QAI 00:24:36.177 11:10:32 -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.QAI 00:24:36.177 11:10:32 -- host/auth.sh@87 -- # gen_key sha384 48 00:24:36.177 11:10:32 -- host/auth.sh@55 -- # local digest len file key 00:24:36.177 11:10:32 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.177 11:10:32 -- host/auth.sh@56 -- # local -A digests 00:24:36.177 11:10:32 -- host/auth.sh@58 -- # digest=sha384 00:24:36.177 11:10:32 -- host/auth.sh@58 -- # len=48 00:24:36.177 11:10:32 -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:36.177 11:10:32 -- host/auth.sh@59 -- # key=2423c40c85cbac8b6e48a942d838d2df83e39b96f89c682f 00:24:36.177 11:10:32 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:24:36.177 11:10:32 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.gFT 00:24:36.177 11:10:32 -- host/auth.sh@61 -- # format_dhchap_key 2423c40c85cbac8b6e48a942d838d2df83e39b96f89c682f 2 00:24:36.177 11:10:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 2423c40c85cbac8b6e48a942d838d2df83e39b96f89c682f 2 00:24:36.177 11:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # key=2423c40c85cbac8b6e48a942d838d2df83e39b96f89c682f 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # digest=2 00:24:36.177 11:10:32 -- nvmf/common.sh@694 -- # python - 00:24:36.177 11:10:32 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.gFT 00:24:36.177 11:10:32 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.gFT 00:24:36.177 11:10:32 -- host/auth.sh@87 -- # 
ckeys[1]=/tmp/spdk.key-sha384.gFT 00:24:36.177 11:10:32 -- host/auth.sh@88 -- # gen_key sha256 32 00:24:36.177 11:10:32 -- host/auth.sh@55 -- # local digest len file key 00:24:36.177 11:10:32 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.177 11:10:32 -- host/auth.sh@56 -- # local -A digests 00:24:36.177 11:10:32 -- host/auth.sh@58 -- # digest=sha256 00:24:36.177 11:10:32 -- host/auth.sh@58 -- # len=32 00:24:36.177 11:10:32 -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:36.177 11:10:32 -- host/auth.sh@59 -- # key=1b49fa27f01ab6e4d10f42ac19af1d48 00:24:36.177 11:10:32 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:24:36.177 11:10:32 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.k0W 00:24:36.177 11:10:32 -- host/auth.sh@61 -- # format_dhchap_key 1b49fa27f01ab6e4d10f42ac19af1d48 1 00:24:36.177 11:10:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 1b49fa27f01ab6e4d10f42ac19af1d48 1 00:24:36.177 11:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # key=1b49fa27f01ab6e4d10f42ac19af1d48 00:24:36.177 11:10:32 -- nvmf/common.sh@693 -- # digest=1 00:24:36.177 11:10:32 -- nvmf/common.sh@694 -- # python - 00:24:36.439 11:10:32 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.k0W 00:24:36.439 11:10:32 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.k0W 00:24:36.439 11:10:32 -- host/auth.sh@88 -- # keys[2]=/tmp/spdk.key-sha256.k0W 00:24:36.439 11:10:32 -- host/auth.sh@88 -- # gen_key sha256 32 00:24:36.439 11:10:32 -- host/auth.sh@55 -- # local digest len file key 00:24:36.439 11:10:32 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.439 11:10:32 -- host/auth.sh@56 -- # local -A digests 00:24:36.439 11:10:32 -- host/auth.sh@58 -- # digest=sha256 00:24:36.439 11:10:32 -- host/auth.sh@58 -- # len=32 00:24:36.439 11:10:32 -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:36.439 11:10:32 -- host/auth.sh@59 -- # key=f89f026bd0419fc94857b197b236ac6f 00:24:36.439 11:10:32 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:24:36.439 11:10:32 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.qvK 00:24:36.439 11:10:32 -- host/auth.sh@61 -- # format_dhchap_key f89f026bd0419fc94857b197b236ac6f 1 00:24:36.439 11:10:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 f89f026bd0419fc94857b197b236ac6f 1 00:24:36.439 11:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.439 11:10:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.439 11:10:32 -- nvmf/common.sh@693 -- # key=f89f026bd0419fc94857b197b236ac6f 00:24:36.439 11:10:32 -- nvmf/common.sh@693 -- # digest=1 00:24:36.439 11:10:32 -- nvmf/common.sh@694 -- # python - 00:24:36.439 11:10:32 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.qvK 00:24:36.439 11:10:32 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.qvK 00:24:36.439 11:10:32 -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.qvK 00:24:36.439 11:10:32 -- host/auth.sh@89 -- # gen_key sha384 48 00:24:36.439 11:10:32 -- host/auth.sh@55 -- # local digest len file key 00:24:36.439 11:10:32 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.439 11:10:32 -- host/auth.sh@56 -- # local -A digests 00:24:36.439 11:10:32 -- host/auth.sh@58 -- # digest=sha384 00:24:36.439 11:10:32 -- host/auth.sh@58 -- # len=48 00:24:36.439 11:10:32 -- host/auth.sh@59 -- # xxd -p -c0 -l 24 
/dev/urandom 00:24:36.439 11:10:32 -- host/auth.sh@59 -- # key=f0739c3b820f9e5d9b0a203ceda7a24bd900e14e602374c4 00:24:36.439 11:10:32 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:24:36.439 11:10:32 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.akI 00:24:36.439 11:10:32 -- host/auth.sh@61 -- # format_dhchap_key f0739c3b820f9e5d9b0a203ceda7a24bd900e14e602374c4 2 00:24:36.439 11:10:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 f0739c3b820f9e5d9b0a203ceda7a24bd900e14e602374c4 2 00:24:36.439 11:10:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.439 11:10:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.439 11:10:32 -- nvmf/common.sh@693 -- # key=f0739c3b820f9e5d9b0a203ceda7a24bd900e14e602374c4 00:24:36.439 11:10:32 -- nvmf/common.sh@693 -- # digest=2 00:24:36.439 11:10:32 -- nvmf/common.sh@694 -- # python - 00:24:36.439 11:10:32 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.akI 00:24:36.439 11:10:32 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.akI 00:24:36.439 11:10:32 -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.akI 00:24:36.439 11:10:32 -- host/auth.sh@89 -- # gen_key null 32 00:24:36.439 11:10:32 -- host/auth.sh@55 -- # local digest len file key 00:24:36.439 11:10:32 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.439 11:10:32 -- host/auth.sh@56 -- # local -A digests 00:24:36.439 11:10:32 -- host/auth.sh@58 -- # digest=null 00:24:36.439 11:10:32 -- host/auth.sh@58 -- # len=32 00:24:36.439 11:10:32 -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:36.439 11:10:33 -- host/auth.sh@59 -- # key=173af7a770be2a38f48499fe7fa3b7df 00:24:36.439 11:10:33 -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:24:36.439 11:10:33 -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.S3S 00:24:36.439 11:10:33 -- host/auth.sh@61 -- # format_dhchap_key 173af7a770be2a38f48499fe7fa3b7df 0 00:24:36.439 11:10:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 173af7a770be2a38f48499fe7fa3b7df 0 00:24:36.440 11:10:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.440 11:10:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.440 11:10:33 -- nvmf/common.sh@693 -- # key=173af7a770be2a38f48499fe7fa3b7df 00:24:36.440 11:10:33 -- nvmf/common.sh@693 -- # digest=0 00:24:36.440 11:10:33 -- nvmf/common.sh@694 -- # python - 00:24:36.440 11:10:33 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.S3S 00:24:36.440 11:10:33 -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.S3S 00:24:36.440 11:10:33 -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.S3S 00:24:36.440 11:10:33 -- host/auth.sh@90 -- # gen_key sha512 64 00:24:36.440 11:10:33 -- host/auth.sh@55 -- # local digest len file key 00:24:36.440 11:10:33 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:36.440 11:10:33 -- host/auth.sh@56 -- # local -A digests 00:24:36.440 11:10:33 -- host/auth.sh@58 -- # digest=sha512 00:24:36.440 11:10:33 -- host/auth.sh@58 -- # len=64 00:24:36.440 11:10:33 -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:36.440 11:10:33 -- host/auth.sh@59 -- # key=242912202c996998686aba1fddcc936ac4df546a0221b4cab9f393e1f03b4885 00:24:36.440 11:10:33 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:24:36.440 11:10:33 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.fhi 00:24:36.440 11:10:33 -- host/auth.sh@61 -- # format_dhchap_key 242912202c996998686aba1fddcc936ac4df546a0221b4cab9f393e1f03b4885 3 00:24:36.440 11:10:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 
242912202c996998686aba1fddcc936ac4df546a0221b4cab9f393e1f03b4885 3 00:24:36.440 11:10:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.440 11:10:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:36.440 11:10:33 -- nvmf/common.sh@693 -- # key=242912202c996998686aba1fddcc936ac4df546a0221b4cab9f393e1f03b4885 00:24:36.440 11:10:33 -- nvmf/common.sh@693 -- # digest=3 00:24:36.440 11:10:33 -- nvmf/common.sh@694 -- # python - 00:24:36.701 11:10:33 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.fhi 00:24:36.701 11:10:33 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.fhi 00:24:36.701 11:10:33 -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.fhi 00:24:36.701 11:10:33 -- host/auth.sh@90 -- # ckeys[4]= 00:24:36.701 11:10:33 -- host/auth.sh@92 -- # waitforlisten 472127 00:24:36.701 11:10:33 -- common/autotest_common.sh@827 -- # '[' -z 472127 ']' 00:24:36.701 11:10:33 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.701 11:10:33 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:36.701 11:10:33 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.701 11:10:33 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:36.701 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.701 11:10:33 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.701 11:10:33 -- common/autotest_common.sh@860 -- # return 0 00:24:36.701 11:10:33 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:36.701 11:10:33 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sY8 00:24:36.701 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.701 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.701 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.701 11:10:33 -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.u5s ]] 00:24:36.701 11:10:33 -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u5s 00:24:36.701 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.701 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.701 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.701 11:10:33 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:36.701 11:10:33 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QAI 00:24:36.701 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.701 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.701 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.701 11:10:33 -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.gFT ]] 00:24:36.701 11:10:33 -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gFT 00:24:36.701 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.701 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.701 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.701 11:10:33 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:36.701 11:10:33 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.k0W 00:24:36.701 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.701 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.701 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
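Every secret registered with keyring_file_add_key below is one of these DHHC-1 strings: the prefix, a two-digit hash identifier taken from the digests map in gen_key (00 = null, 01 = sha256, 02 = sha384, 03 = sha512), and a base64 payload carrying the ASCII hex secret plus a 4-byte CRC-32 trailer, terminated by a colon. The layout can be checked against any key printed later in the trace, for example the keyid-1 secret (value copied verbatim from the log; only coreutils base64 and xxd are assumed):

  k='DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==:'
  k=${k#DHHC-1:00:}; k=${k%:}
  echo "$k" | base64 -d | xxd
  # prints the 48-character hex secret 32dd85...5c6e31 followed by the 4 CRC-32 bytes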
00:24:36.701 11:10:33 -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.qvK ]] 00:24:36.701 11:10:33 -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qvK 00:24:36.701 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.701 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.962 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.962 11:10:33 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:36.962 11:10:33 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.akI 00:24:36.962 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.962 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.962 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.962 11:10:33 -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.S3S ]] 00:24:36.963 11:10:33 -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.S3S 00:24:36.963 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.963 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.963 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.963 11:10:33 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:36.963 11:10:33 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.fhi 00:24:36.963 11:10:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.963 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.963 11:10:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.963 11:10:33 -- host/auth.sh@95 -- # [[ -n '' ]] 00:24:36.963 11:10:33 -- host/auth.sh@98 -- # nvmet_auth_init 00:24:36.963 11:10:33 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:36.963 11:10:33 -- nvmf/common.sh@717 -- # local ip 00:24:36.963 11:10:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.963 11:10:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.963 11:10:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.963 11:10:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.963 11:10:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.963 11:10:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.963 11:10:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.963 11:10:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.963 11:10:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.963 11:10:33 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:36.963 11:10:33 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:36.963 11:10:33 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:36.963 11:10:33 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:36.963 11:10:33 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:36.963 11:10:33 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:36.963 11:10:33 -- nvmf/common.sh@628 -- # local block nvme 00:24:36.963 11:10:33 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:36.963 11:10:33 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:36.963 11:10:33 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:36.963 11:10:33 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:40.265 Waiting for block devices as requested 00:24:40.265 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:40.265 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:40.265 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:40.265 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:40.265 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:40.526 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:40.526 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:40.526 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:40.787 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:40.787 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:41.048 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:41.048 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:41.048 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:41.048 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:41.309 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:41.309 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:41.309 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:42.252 11:10:38 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:42.252 11:10:38 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:42.252 11:10:38 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:42.252 11:10:38 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:42.252 11:10:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:42.252 11:10:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:42.252 11:10:38 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:42.252 11:10:38 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:42.252 11:10:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:42.252 No valid GPT data, bailing 00:24:42.252 11:10:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:42.252 11:10:38 -- scripts/common.sh@391 -- # pt= 00:24:42.252 11:10:38 -- scripts/common.sh@392 -- # return 1 00:24:42.252 11:10:38 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:42.252 11:10:38 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:42.252 11:10:38 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:42.252 11:10:38 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:42.252 11:10:38 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:42.252 11:10:38 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:42.252 11:10:38 -- nvmf/common.sh@656 -- # echo 1 00:24:42.252 11:10:38 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:42.252 11:10:38 -- nvmf/common.sh@658 -- # echo 1 00:24:42.252 11:10:38 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:42.252 11:10:38 -- nvmf/common.sh@661 -- # echo tcp 00:24:42.252 11:10:38 -- nvmf/common.sh@662 -- # echo 4420 00:24:42.252 11:10:38 -- nvmf/common.sh@663 -- # echo ipv4 00:24:42.252 11:10:38 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:42.252 11:10:38 -- nvmf/common.sh@669 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:24:42.252 00:24:42.252 Discovery Log Number of Records 2, Generation counter 2 00:24:42.252 =====Discovery Log Entry 0====== 00:24:42.252 trtype: tcp 00:24:42.252 adrfam: ipv4 00:24:42.252 subtype: current discovery subsystem 00:24:42.252 treq: not specified, sq flow control disable supported 00:24:42.252 portid: 1 00:24:42.252 trsvcid: 4420 00:24:42.252 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:42.252 traddr: 10.0.0.1 00:24:42.252 eflags: none 00:24:42.252 sectype: none 00:24:42.252 =====Discovery Log Entry 1====== 00:24:42.252 trtype: tcp 00:24:42.252 adrfam: ipv4 00:24:42.252 subtype: nvme subsystem 00:24:42.252 treq: not specified, sq flow control disable supported 00:24:42.252 portid: 1 00:24:42.252 trsvcid: 4420 00:24:42.252 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:42.252 traddr: 10.0.0.1 00:24:42.252 eflags: none 00:24:42.252 sectype: none 00:24:42.252 11:10:38 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:42.252 11:10:38 -- host/auth.sh@37 -- # echo 0 00:24:42.252 11:10:38 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:42.252 11:10:38 -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:42.252 11:10:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.252 11:10:38 -- host/auth.sh@44 -- # digest=sha256 00:24:42.252 11:10:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.252 11:10:38 -- host/auth.sh@44 -- # keyid=1 00:24:42.252 11:10:38 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:42.252 11:10:38 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:42.252 11:10:38 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.512 11:10:38 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.512 11:10:39 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:42.512 11:10:39 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:24:42.512 11:10:39 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:42.512 11:10:39 -- host/auth.sh@106 -- # IFS=, 00:24:42.512 11:10:39 -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:24:42.512 11:10:39 -- host/auth.sh@106 -- # IFS=, 00:24:42.512 11:10:39 -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.512 11:10:39 -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:42.512 11:10:39 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:42.512 11:10:39 -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:24:42.512 11:10:39 -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.512 11:10:39 -- host/auth.sh@70 -- # keyid=1 00:24:42.512 11:10:39 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.512 11:10:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.512 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.512 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:42.512 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.512 11:10:39 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:42.512 11:10:39 -- nvmf/common.sh@717 -- # local ip 00:24:42.512 11:10:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.512 11:10:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.512 11:10:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.512 11:10:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.512 11:10:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.512 11:10:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.512 11:10:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.512 11:10:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.512 11:10:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.512 11:10:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.512 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.512 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:42.773 nvme0n1 00:24:42.773 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.773 11:10:39 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.773 11:10:39 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:42.773 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.773 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:42.773 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.773 11:10:39 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.773 11:10:39 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.773 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.773 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:42.773 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.773 11:10:39 -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:24:42.773 11:10:39 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.773 11:10:39 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:42.773 11:10:39 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:42.773 11:10:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.773 11:10:39 -- host/auth.sh@44 -- # digest=sha256 00:24:42.773 11:10:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.773 11:10:39 -- host/auth.sh@44 -- # keyid=0 00:24:42.773 11:10:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:42.773 11:10:39 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:42.773 11:10:39 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.773 11:10:39 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.773 11:10:39 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:42.773 11:10:39 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:24:42.773 11:10:39 -- host/auth.sh@51 -- # echo 
DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:42.773 11:10:39 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:24:42.773 11:10:39 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:42.773 11:10:39 -- host/auth.sh@70 -- # digest=sha256 00:24:42.773 11:10:39 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:42.773 11:10:39 -- host/auth.sh@70 -- # keyid=0 00:24:42.773 11:10:39 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.773 11:10:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.773 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.773 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:42.773 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.773 11:10:39 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:42.773 11:10:39 -- nvmf/common.sh@717 -- # local ip 00:24:42.773 11:10:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.773 11:10:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.773 11:10:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.773 11:10:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.773 11:10:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.773 11:10:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.773 11:10:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.773 11:10:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.773 11:10:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.774 11:10:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.774 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.774 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.035 nvme0n1 00:24:43.035 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.035 11:10:39 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.035 11:10:39 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.035 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.035 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.035 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.035 11:10:39 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.035 11:10:39 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.035 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.035 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.035 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.035 11:10:39 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.035 11:10:39 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:43.035 11:10:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.035 11:10:39 -- host/auth.sh@44 -- # digest=sha256 00:24:43.035 11:10:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.035 11:10:39 -- host/auth.sh@44 -- # keyid=1 00:24:43.035 11:10:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:43.035 11:10:39 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 
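Stripped of the wrappers, each connect_authenticate pass is the pair of RPCs just traced: constrain the initiator's DH-HMAC-CHAP digests/dhgroups, then attach a controller to the kernel target at 10.0.0.1:4420 with the key pair for that keyid. rpc_cmd is the autotest wrapper around scripts/rpc.py, so issued by hand (ignoring the netns/socket plumbing) the sha256/ffdhe2048, keyid-0 case would look roughly like:

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers      # expect nvme0 once authentication succeeds
  scripts/rpc.py bdev_nvme_detach_controller nvme0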
00:24:43.035 11:10:39 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.035 11:10:39 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.035 11:10:39 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:43.035 11:10:39 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:24:43.035 11:10:39 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:43.035 11:10:39 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:24:43.035 11:10:39 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:43.035 11:10:39 -- host/auth.sh@70 -- # digest=sha256 00:24:43.035 11:10:39 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:43.035 11:10:39 -- host/auth.sh@70 -- # keyid=1 00:24:43.035 11:10:39 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.035 11:10:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.035 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.035 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.035 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.035 11:10:39 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:43.035 11:10:39 -- nvmf/common.sh@717 -- # local ip 00:24:43.035 11:10:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.035 11:10:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.035 11:10:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.036 11:10:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.036 11:10:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.036 11:10:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.036 11:10:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.036 11:10:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.036 11:10:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.036 11:10:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.036 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.036 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.036 nvme0n1 00:24:43.036 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.036 11:10:39 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.036 11:10:39 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.036 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.036 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.036 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.296 11:10:39 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.296 11:10:39 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.296 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.296 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.296 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.296 11:10:39 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.296 11:10:39 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:43.296 11:10:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.296 
11:10:39 -- host/auth.sh@44 -- # digest=sha256 00:24:43.296 11:10:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.296 11:10:39 -- host/auth.sh@44 -- # keyid=2 00:24:43.296 11:10:39 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:43.296 11:10:39 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:43.296 11:10:39 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.296 11:10:39 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.296 11:10:39 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:43.296 11:10:39 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:24:43.296 11:10:39 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:43.296 11:10:39 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:24:43.296 11:10:39 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:43.296 11:10:39 -- host/auth.sh@70 -- # digest=sha256 00:24:43.296 11:10:39 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:43.296 11:10:39 -- host/auth.sh@70 -- # keyid=2 00:24:43.296 11:10:39 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.296 11:10:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.296 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.296 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.296 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.296 11:10:39 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:43.296 11:10:39 -- nvmf/common.sh@717 -- # local ip 00:24:43.296 11:10:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.296 11:10:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.296 11:10:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.296 11:10:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.296 11:10:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.296 11:10:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.296 11:10:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.296 11:10:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.296 11:10:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.296 11:10:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:43.296 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.296 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.296 nvme0n1 00:24:43.296 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.296 11:10:39 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.296 11:10:39 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.296 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.296 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.296 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.557 11:10:39 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.557 11:10:39 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.557 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.557 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.557 
11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.557 11:10:39 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.557 11:10:39 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:43.557 11:10:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.557 11:10:39 -- host/auth.sh@44 -- # digest=sha256 00:24:43.557 11:10:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.557 11:10:39 -- host/auth.sh@44 -- # keyid=3 00:24:43.557 11:10:39 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:43.557 11:10:39 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:43.557 11:10:39 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.558 11:10:39 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.558 11:10:39 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:43.558 11:10:39 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:24:43.558 11:10:39 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:43.558 11:10:39 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 00:24:43.558 11:10:39 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:43.558 11:10:39 -- host/auth.sh@70 -- # digest=sha256 00:24:43.558 11:10:39 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:43.558 11:10:39 -- host/auth.sh@70 -- # keyid=3 00:24:43.558 11:10:39 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.558 11:10:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.558 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.558 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.558 11:10:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.558 11:10:39 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:43.558 11:10:39 -- nvmf/common.sh@717 -- # local ip 00:24:43.558 11:10:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.558 11:10:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.558 11:10:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.558 11:10:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.558 11:10:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.558 11:10:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.558 11:10:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.558 11:10:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.558 11:10:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.558 11:10:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.558 11:10:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.558 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:43.558 nvme0n1 00:24:43.558 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.558 11:10:40 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.558 11:10:40 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.558 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.558 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:43.558 11:10:40 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.558 11:10:40 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.558 11:10:40 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.558 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.558 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:43.558 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.558 11:10:40 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.558 11:10:40 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:43.558 11:10:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.558 11:10:40 -- host/auth.sh@44 -- # digest=sha256 00:24:43.558 11:10:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.558 11:10:40 -- host/auth.sh@44 -- # keyid=4 00:24:43.558 11:10:40 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:43.558 11:10:40 -- host/auth.sh@46 -- # ckey= 00:24:43.820 11:10:40 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.820 11:10:40 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.820 11:10:40 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:43.820 11:10:40 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.820 11:10:40 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:24:43.820 11:10:40 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:43.820 11:10:40 -- host/auth.sh@70 -- # digest=sha256 00:24:43.820 11:10:40 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:43.820 11:10:40 -- host/auth.sh@70 -- # keyid=4 00:24:43.820 11:10:40 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.820 11:10:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.820 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.820 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:43.820 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.820 11:10:40 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:43.820 11:10:40 -- nvmf/common.sh@717 -- # local ip 00:24:43.820 11:10:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.820 11:10:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.820 11:10:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.820 11:10:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.820 11:10:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.820 11:10:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.820 11:10:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.820 11:10:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.820 11:10:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.820 11:10:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.820 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.820 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:43.820 nvme0n1 00:24:43.820 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.820 11:10:40 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.820 11:10:40 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.820 11:10:40 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.820 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:43.820 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.820 11:10:40 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.820 11:10:40 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.820 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.820 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:43.820 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.820 11:10:40 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.820 11:10:40 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.820 11:10:40 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:43.820 11:10:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.820 11:10:40 -- host/auth.sh@44 -- # digest=sha256 00:24:43.820 11:10:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.820 11:10:40 -- host/auth.sh@44 -- # keyid=0 00:24:43.820 11:10:40 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:43.820 11:10:40 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:43.820 11:10:40 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.820 11:10:40 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.082 11:10:40 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:44.082 11:10:40 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:24:44.082 11:10:40 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:44.082 11:10:40 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:24:44.082 11:10:40 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:44.082 11:10:40 -- host/auth.sh@70 -- # digest=sha256 00:24:44.082 11:10:40 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:44.082 11:10:40 -- host/auth.sh@70 -- # keyid=0 00:24:44.082 11:10:40 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.082 11:10:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.082 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.082 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:44.082 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.082 11:10:40 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:44.082 11:10:40 -- nvmf/common.sh@717 -- # local ip 00:24:44.082 11:10:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.082 11:10:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.082 11:10:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.082 11:10:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.082 11:10:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.082 11:10:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.082 11:10:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.082 11:10:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.082 11:10:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.082 11:10:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.082 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.082 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:44.342 nvme0n1 00:24:44.342 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.342 11:10:40 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.342 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.342 11:10:40 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:44.342 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:44.342 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.342 11:10:40 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.342 11:10:40 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.342 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.342 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:44.342 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.342 11:10:40 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:44.342 11:10:40 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:44.342 11:10:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.342 11:10:40 -- host/auth.sh@44 -- # digest=sha256 00:24:44.342 11:10:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.342 11:10:40 -- host/auth.sh@44 -- # keyid=1 00:24:44.342 11:10:40 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:44.342 11:10:40 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:44.342 11:10:40 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.342 11:10:40 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.342 11:10:40 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:44.342 11:10:40 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:24:44.342 11:10:40 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:44.342 11:10:40 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:24:44.342 11:10:40 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:44.342 11:10:40 -- host/auth.sh@70 -- # digest=sha256 00:24:44.342 11:10:40 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:44.342 11:10:40 -- host/auth.sh@70 -- # keyid=1 00:24:44.342 11:10:40 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.342 11:10:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.342 11:10:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.342 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:44.343 11:10:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.604 11:10:40 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:44.604 11:10:40 -- nvmf/common.sh@717 -- # local ip 00:24:44.604 11:10:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.604 11:10:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.604 11:10:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.604 11:10:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.604 
11:10:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.604 11:10:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.604 11:10:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.604 11:10:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.604 11:10:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.604 11:10:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.604 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.604 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.604 nvme0n1 00:24:44.604 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.604 11:10:41 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.604 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.604 11:10:41 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:44.604 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.604 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.604 11:10:41 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.604 11:10:41 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.604 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.604 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.604 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.604 11:10:41 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:44.604 11:10:41 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:44.604 11:10:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.604 11:10:41 -- host/auth.sh@44 -- # digest=sha256 00:24:44.604 11:10:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.604 11:10:41 -- host/auth.sh@44 -- # keyid=2 00:24:44.604 11:10:41 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:44.604 11:10:41 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:44.604 11:10:41 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.604 11:10:41 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.604 11:10:41 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:44.604 11:10:41 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:24:44.604 11:10:41 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:44.604 11:10:41 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:24:44.604 11:10:41 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:44.604 11:10:41 -- host/auth.sh@70 -- # digest=sha256 00:24:44.604 11:10:41 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:44.604 11:10:41 -- host/auth.sh@70 -- # keyid=2 00:24:44.604 11:10:41 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.604 11:10:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.604 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.604 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.865 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.865 11:10:41 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:44.865 11:10:41 -- nvmf/common.sh@717 -- # local ip 00:24:44.865 11:10:41 
-- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.865 11:10:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.865 11:10:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.865 11:10:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.865 11:10:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.865 11:10:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.865 11:10:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.865 11:10:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.865 11:10:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.865 11:10:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.865 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.865 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.865 nvme0n1 00:24:44.865 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.865 11:10:41 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.865 11:10:41 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:44.865 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.865 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:44.865 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.865 11:10:41 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.865 11:10:41 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.865 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.865 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:45.125 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.125 11:10:41 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:45.125 11:10:41 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:45.125 11:10:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.125 11:10:41 -- host/auth.sh@44 -- # digest=sha256 00:24:45.125 11:10:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:45.125 11:10:41 -- host/auth.sh@44 -- # keyid=3 00:24:45.125 11:10:41 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:45.125 11:10:41 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:45.125 11:10:41 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.125 11:10:41 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:45.126 11:10:41 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:45.126 11:10:41 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:24:45.126 11:10:41 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:45.126 11:10:41 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:24:45.126 11:10:41 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:45.126 11:10:41 -- host/auth.sh@70 -- # digest=sha256 00:24:45.126 11:10:41 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:45.126 11:10:41 -- host/auth.sh@70 -- # keyid=3 00:24:45.126 11:10:41 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.126 11:10:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
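The host/auth.sh@113-@117 frames repeating through this stretch give the shape of the main loop: every digest is combined with every DH group and every key index, updating the target-side key and re-running the attach/verify/detach cycle each time. Reconstructed from those frames (helper bodies elided):

  for digest in "${digests[@]}"; do                            # host/auth.sh@113
      for dhgroup in "${dhgroups[@]}"; do                      # @114
          for keyid in "${!keys[@]}"; do                       # @115
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @116: update the target-side DHCHAP key
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # @117: attach, check nvme0, detach
          done
      done
  done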
00:24:45.126 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.126 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:45.126 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.126 11:10:41 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:45.126 11:10:41 -- nvmf/common.sh@717 -- # local ip 00:24:45.126 11:10:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.126 11:10:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.126 11:10:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.126 11:10:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.126 11:10:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.126 11:10:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.126 11:10:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.126 11:10:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.126 11:10:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.126 11:10:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:45.126 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.126 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:45.126 nvme0n1 00:24:45.126 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.126 11:10:41 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.126 11:10:41 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:45.126 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.126 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:45.126 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.126 11:10:41 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.126 11:10:41 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.126 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.388 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:45.388 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.388 11:10:41 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:45.388 11:10:41 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:45.388 11:10:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.388 11:10:41 -- host/auth.sh@44 -- # digest=sha256 00:24:45.388 11:10:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:45.388 11:10:41 -- host/auth.sh@44 -- # keyid=4 00:24:45.388 11:10:41 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:45.388 11:10:41 -- host/auth.sh@46 -- # ckey= 00:24:45.388 11:10:41 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.388 11:10:41 -- host/auth.sh@49 -- # echo ffdhe3072 00:24:45.388 11:10:41 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:45.388 11:10:41 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:45.388 11:10:41 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:24:45.388 11:10:41 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:45.388 11:10:41 -- host/auth.sh@70 -- # digest=sha256 00:24:45.388 11:10:41 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:45.388 11:10:41 -- host/auth.sh@70 -- # keyid=4 00:24:45.388 11:10:41 -- host/auth.sh@71 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.388 11:10:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:45.388 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.388 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:45.388 11:10:41 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.388 11:10:41 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:45.388 11:10:41 -- nvmf/common.sh@717 -- # local ip 00:24:45.388 11:10:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.388 11:10:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.388 11:10:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.388 11:10:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.388 11:10:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.388 11:10:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.388 11:10:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.388 11:10:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.388 11:10:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.388 11:10:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.388 11:10:41 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.388 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:24:45.388 nvme0n1 00:24:45.388 11:10:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.388 11:10:42 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.388 11:10:42 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:45.388 11:10:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.388 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:45.388 11:10:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.648 11:10:42 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.649 11:10:42 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.649 11:10:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.649 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:45.649 11:10:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.649 11:10:42 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.649 11:10:42 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:45.649 11:10:42 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:45.649 11:10:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.649 11:10:42 -- host/auth.sh@44 -- # digest=sha256 00:24:45.649 11:10:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.649 11:10:42 -- host/auth.sh@44 -- # keyid=0 00:24:45.649 11:10:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:45.649 11:10:42 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:45.649 11:10:42 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.649 11:10:42 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.220 11:10:42 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:46.220 11:10:42 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:24:46.220 11:10:42 -- host/auth.sh@51 -- # echo 
DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:46.220 11:10:42 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:24:46.220 11:10:42 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:46.220 11:10:42 -- host/auth.sh@70 -- # digest=sha256 00:24:46.220 11:10:42 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:46.220 11:10:42 -- host/auth.sh@70 -- # keyid=0 00:24:46.220 11:10:42 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.220 11:10:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.220 11:10:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.220 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:46.220 11:10:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.220 11:10:42 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:46.220 11:10:42 -- nvmf/common.sh@717 -- # local ip 00:24:46.220 11:10:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.220 11:10:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.220 11:10:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.220 11:10:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.220 11:10:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.220 11:10:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.220 11:10:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.220 11:10:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.220 11:10:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.220 11:10:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.220 11:10:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.220 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:46.481 nvme0n1 00:24:46.481 11:10:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.481 11:10:42 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.481 11:10:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.481 11:10:42 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:46.481 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:46.481 11:10:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.481 11:10:42 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.481 11:10:42 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.481 11:10:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.481 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:46.481 11:10:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.481 11:10:42 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:46.481 11:10:42 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:46.481 11:10:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.481 11:10:42 -- host/auth.sh@44 -- # digest=sha256 00:24:46.481 11:10:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.481 11:10:42 -- host/auth.sh@44 -- # keyid=1 00:24:46.481 11:10:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:46.481 11:10:42 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 
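Every attach is preceded by get_main_ns_ip (nvmf/common.sh), which picks the address the initiator should dial based on the transport: the xtrace shows an associative array mapping rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, and since this run uses tcp it resolves to 10.0.0.1. A condensed sketch of that selection logic, reconstructed from the trace (the transport variable name TEST_TRANSPORT is an assumption, not taken from the log):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
      ip=${!ip}                              # indirect expansion -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }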
00:24:46.481 11:10:42 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.481 11:10:42 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.482 11:10:42 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:46.482 11:10:42 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:24:46.482 11:10:42 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:46.482 11:10:42 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:24:46.482 11:10:42 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:46.482 11:10:42 -- host/auth.sh@70 -- # digest=sha256 00:24:46.482 11:10:42 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:46.482 11:10:42 -- host/auth.sh@70 -- # keyid=1 00:24:46.482 11:10:42 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.482 11:10:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.482 11:10:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.482 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:46.482 11:10:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.482 11:10:42 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:46.482 11:10:42 -- nvmf/common.sh@717 -- # local ip 00:24:46.482 11:10:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.482 11:10:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.482 11:10:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.482 11:10:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.482 11:10:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.482 11:10:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.482 11:10:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.482 11:10:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.482 11:10:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.482 11:10:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.482 11:10:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.482 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:46.742 nvme0n1 00:24:46.742 11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.742 11:10:43 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.742 11:10:43 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:46.742 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.742 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:46.742 11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.742 11:10:43 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.742 11:10:43 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.742 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.742 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:46.742 11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.742 11:10:43 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:46.742 11:10:43 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:46.742 11:10:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.742 
11:10:43 -- host/auth.sh@44 -- # digest=sha256 00:24:46.742 11:10:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.742 11:10:43 -- host/auth.sh@44 -- # keyid=2 00:24:46.742 11:10:43 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:46.742 11:10:43 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:46.742 11:10:43 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.742 11:10:43 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.742 11:10:43 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:46.742 11:10:43 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:24:46.742 11:10:43 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:46.742 11:10:43 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:24:46.742 11:10:43 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:46.742 11:10:43 -- host/auth.sh@70 -- # digest=sha256 00:24:46.742 11:10:43 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:46.742 11:10:43 -- host/auth.sh@70 -- # keyid=2 00:24:46.742 11:10:43 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.742 11:10:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.742 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.742 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:46.742 11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.742 11:10:43 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:46.742 11:10:43 -- nvmf/common.sh@717 -- # local ip 00:24:46.742 11:10:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.742 11:10:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.742 11:10:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.742 11:10:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.742 11:10:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.742 11:10:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.742 11:10:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.742 11:10:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.742 11:10:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.742 11:10:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.742 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.742 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:47.002 nvme0n1 00:24:47.002 11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.002 11:10:43 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.002 11:10:43 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:47.002 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.002 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:47.002 11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.263 11:10:43 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.263 11:10:43 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.263 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.263 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:47.263 
11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.263 11:10:43 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:47.263 11:10:43 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:47.263 11:10:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.263 11:10:43 -- host/auth.sh@44 -- # digest=sha256 00:24:47.263 11:10:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:47.263 11:10:43 -- host/auth.sh@44 -- # keyid=3 00:24:47.263 11:10:43 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:47.263 11:10:43 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:47.263 11:10:43 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.263 11:10:43 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:47.263 11:10:43 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:47.263 11:10:43 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:24:47.263 11:10:43 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:47.263 11:10:43 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:24:47.263 11:10:43 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:47.263 11:10:43 -- host/auth.sh@70 -- # digest=sha256 00:24:47.263 11:10:43 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:47.263 11:10:43 -- host/auth.sh@70 -- # keyid=3 00:24:47.263 11:10:43 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.263 11:10:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:47.263 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.263 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:47.263 11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.263 11:10:43 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:47.263 11:10:43 -- nvmf/common.sh@717 -- # local ip 00:24:47.263 11:10:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:47.263 11:10:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:47.263 11:10:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.263 11:10:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.263 11:10:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:47.263 11:10:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.263 11:10:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:47.263 11:10:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:47.263 11:10:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:47.263 11:10:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.263 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.263 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:47.524 nvme0n1 00:24:47.524 11:10:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.524 11:10:43 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.524 11:10:43 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:47.524 11:10:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.524 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:24:47.524 11:10:43 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.524 11:10:44 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.524 11:10:44 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.524 11:10:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.524 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:24:47.524 11:10:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.524 11:10:44 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:47.524 11:10:44 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:47.524 11:10:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.524 11:10:44 -- host/auth.sh@44 -- # digest=sha256 00:24:47.524 11:10:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:47.524 11:10:44 -- host/auth.sh@44 -- # keyid=4 00:24:47.524 11:10:44 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:47.524 11:10:44 -- host/auth.sh@46 -- # ckey= 00:24:47.524 11:10:44 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.524 11:10:44 -- host/auth.sh@49 -- # echo ffdhe4096 00:24:47.524 11:10:44 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:47.524 11:10:44 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.524 11:10:44 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:24:47.524 11:10:44 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:47.524 11:10:44 -- host/auth.sh@70 -- # digest=sha256 00:24:47.524 11:10:44 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:47.524 11:10:44 -- host/auth.sh@70 -- # keyid=4 00:24:47.524 11:10:44 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.524 11:10:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:47.524 11:10:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.524 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:24:47.524 11:10:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.524 11:10:44 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:47.524 11:10:44 -- nvmf/common.sh@717 -- # local ip 00:24:47.524 11:10:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:47.524 11:10:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:47.524 11:10:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.524 11:10:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.524 11:10:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:47.524 11:10:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.524 11:10:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:47.524 11:10:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:47.524 11:10:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:47.524 11:10:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.524 11:10:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.524 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:24:47.785 nvme0n1 00:24:47.785 11:10:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.785 11:10:44 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.785 11:10:44 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:47.785 11:10:44 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.785 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:24:47.785 11:10:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.785 11:10:44 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.785 11:10:44 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.785 11:10:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.785 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:24:47.785 11:10:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.785 11:10:44 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.785 11:10:44 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:47.785 11:10:44 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:47.785 11:10:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.785 11:10:44 -- host/auth.sh@44 -- # digest=sha256 00:24:47.785 11:10:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.785 11:10:44 -- host/auth.sh@44 -- # keyid=0 00:24:47.785 11:10:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:47.785 11:10:44 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:47.785 11:10:44 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.785 11:10:44 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.697 11:10:46 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:49.697 11:10:46 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:24:49.697 11:10:46 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:49.697 11:10:46 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:24:49.697 11:10:46 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:49.697 11:10:46 -- host/auth.sh@70 -- # digest=sha256 00:24:49.697 11:10:46 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:49.697 11:10:46 -- host/auth.sh@70 -- # keyid=0 00:24:49.697 11:10:46 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.697 11:10:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.697 11:10:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.697 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:24:49.697 11:10:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.697 11:10:46 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:49.697 11:10:46 -- nvmf/common.sh@717 -- # local ip 00:24:49.697 11:10:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.697 11:10:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.697 11:10:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.697 11:10:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.697 11:10:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.697 11:10:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.697 11:10:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.697 11:10:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.697 11:10:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.697 11:10:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.697 11:10:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.697 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:24:49.957 nvme0n1 00:24:49.957 11:10:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.957 11:10:46 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.957 11:10:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.957 11:10:46 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:49.957 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:24:49.957 11:10:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.218 11:10:46 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.218 11:10:46 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.218 11:10:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.218 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:24:50.218 11:10:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.218 11:10:46 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:50.218 11:10:46 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:50.218 11:10:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.218 11:10:46 -- host/auth.sh@44 -- # digest=sha256 00:24:50.218 11:10:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.218 11:10:46 -- host/auth.sh@44 -- # keyid=1 00:24:50.218 11:10:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:50.218 11:10:46 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:50.218 11:10:46 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.218 11:10:46 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.218 11:10:46 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:50.218 11:10:46 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:24:50.218 11:10:46 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:50.218 11:10:46 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:24:50.218 11:10:46 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:50.218 11:10:46 -- host/auth.sh@70 -- # digest=sha256 00:24:50.218 11:10:46 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:50.218 11:10:46 -- host/auth.sh@70 -- # keyid=1 00:24:50.218 11:10:46 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.218 11:10:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:50.218 11:10:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.218 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:24:50.218 11:10:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.218 11:10:46 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:50.218 11:10:46 -- nvmf/common.sh@717 -- # local ip 00:24:50.218 11:10:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.218 11:10:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.218 11:10:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.218 11:10:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.218 
11:10:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.218 11:10:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.218 11:10:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.218 11:10:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.218 11:10:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.218 11:10:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.218 11:10:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.218 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:24:50.790 nvme0n1 00:24:50.790 11:10:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.790 11:10:47 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.790 11:10:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.790 11:10:47 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:50.790 11:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:50.790 11:10:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.790 11:10:47 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.790 11:10:47 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.790 11:10:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.790 11:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:50.790 11:10:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.790 11:10:47 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:50.790 11:10:47 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:50.790 11:10:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.790 11:10:47 -- host/auth.sh@44 -- # digest=sha256 00:24:50.790 11:10:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.790 11:10:47 -- host/auth.sh@44 -- # keyid=2 00:24:50.790 11:10:47 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:50.790 11:10:47 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:50.790 11:10:47 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.790 11:10:47 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.790 11:10:47 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:50.790 11:10:47 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:24:50.790 11:10:47 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:50.790 11:10:47 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:24:50.790 11:10:47 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:50.790 11:10:47 -- host/auth.sh@70 -- # digest=sha256 00:24:50.790 11:10:47 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:50.790 11:10:47 -- host/auth.sh@70 -- # keyid=2 00:24:50.790 11:10:47 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.790 11:10:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:50.790 11:10:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.790 11:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:50.790 11:10:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.790 11:10:47 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:50.790 11:10:47 -- nvmf/common.sh@717 -- # local ip 00:24:50.790 11:10:47 
-- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.790 11:10:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.790 11:10:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.790 11:10:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.790 11:10:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.790 11:10:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.790 11:10:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.790 11:10:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.790 11:10:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.790 11:10:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.790 11:10:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.790 11:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:51.050 nvme0n1 00:24:51.050 11:10:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.050 11:10:47 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.050 11:10:47 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:51.050 11:10:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.050 11:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:51.050 11:10:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.311 11:10:47 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.311 11:10:47 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.311 11:10:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.311 11:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:51.311 11:10:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.311 11:10:47 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:51.311 11:10:47 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:51.311 11:10:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.311 11:10:47 -- host/auth.sh@44 -- # digest=sha256 00:24:51.311 11:10:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.311 11:10:47 -- host/auth.sh@44 -- # keyid=3 00:24:51.311 11:10:47 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:51.311 11:10:47 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:51.311 11:10:47 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.311 11:10:47 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.311 11:10:47 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:51.311 11:10:47 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:24:51.311 11:10:47 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:51.311 11:10:47 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:24:51.311 11:10:47 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:51.311 11:10:47 -- host/auth.sh@70 -- # digest=sha256 00:24:51.311 11:10:47 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:51.311 11:10:47 -- host/auth.sh@70 -- # keyid=3 00:24:51.311 11:10:47 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.311 11:10:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
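nvmet_auth_set_key drives the target side of each slot: per the trace it emits the kernel crypto name of the digest ('hmac(sha256)'), the DH group name (ffdhe6144 in this pass), the DHHC-1 host secret and, when one is defined, the DHHC-1 controller secret. In the DHHC-1 representation the two-digit field after the prefix describes how the base64 payload is transformed (00 meaning it is used as-is, 01/02/03 selecting a SHA-2 hash), which is why that field varies between keys here independently of the slot number. On a kernel nvmet target these values are typically written into the per-host configfs attributes; the paths below are an assumption for illustration and do not appear in the trace:

  # assumed configfs layout for the allowed host (illustrative, not from the log)
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'               > "$host_cfg/dhchap_hash"      # digest
  echo ffdhe6144                    > "$host_cfg/dhchap_dhgroup"   # DH group
  echo "DHHC-1:01:<base64 secret>:" > "$host_cfg/dhchap_key"       # host secret
  echo "DHHC-1:01:<base64 secret>:" > "$host_cfg/dhchap_ctrl_key"  # controller secret (optional)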
00:24:51.311 11:10:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.311 11:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:51.311 11:10:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.311 11:10:47 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:51.311 11:10:47 -- nvmf/common.sh@717 -- # local ip 00:24:51.311 11:10:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.311 11:10:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.311 11:10:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.311 11:10:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.311 11:10:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:51.311 11:10:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.311 11:10:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.311 11:10:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.311 11:10:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.311 11:10:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.311 11:10:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.311 11:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:51.882 nvme0n1 00:24:51.882 11:10:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.882 11:10:48 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.882 11:10:48 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:51.882 11:10:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.882 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:51.882 11:10:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.882 11:10:48 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.882 11:10:48 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.882 11:10:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.882 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:51.882 11:10:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.882 11:10:48 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:51.882 11:10:48 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:51.882 11:10:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.882 11:10:48 -- host/auth.sh@44 -- # digest=sha256 00:24:51.882 11:10:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.882 11:10:48 -- host/auth.sh@44 -- # keyid=4 00:24:51.882 11:10:48 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:51.882 11:10:48 -- host/auth.sh@46 -- # ckey= 00:24:51.882 11:10:48 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.882 11:10:48 -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.882 11:10:48 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:51.882 11:10:48 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.882 11:10:48 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:24:51.882 11:10:48 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:51.882 11:10:48 -- host/auth.sh@70 -- # digest=sha256 00:24:51.882 11:10:48 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:51.882 11:10:48 -- host/auth.sh@70 -- # keyid=4 00:24:51.882 11:10:48 -- host/auth.sh@71 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.882 11:10:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:51.882 11:10:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.882 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:51.882 11:10:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.882 11:10:48 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:51.882 11:10:48 -- nvmf/common.sh@717 -- # local ip 00:24:51.882 11:10:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.882 11:10:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.882 11:10:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.882 11:10:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.882 11:10:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:51.882 11:10:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.882 11:10:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.882 11:10:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.882 11:10:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.882 11:10:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:51.882 11:10:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.882 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:52.142 nvme0n1 00:24:52.142 11:10:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.142 11:10:48 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.142 11:10:48 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:52.142 11:10:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.142 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:52.142 11:10:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.403 11:10:48 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.403 11:10:48 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.403 11:10:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.403 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:52.403 11:10:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.403 11:10:48 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.403 11:10:48 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:52.403 11:10:48 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:52.403 11:10:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.403 11:10:48 -- host/auth.sh@44 -- # digest=sha256 00:24:52.403 11:10:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.403 11:10:48 -- host/auth.sh@44 -- # keyid=0 00:24:52.403 11:10:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:52.403 11:10:48 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:52.403 11:10:48 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.403 11:10:48 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.700 11:10:52 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:55.700 11:10:52 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:24:55.700 11:10:52 -- host/auth.sh@51 -- # echo 
DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:55.700 11:10:52 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:24:55.700 11:10:52 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:55.700 11:10:52 -- host/auth.sh@70 -- # digest=sha256 00:24:55.700 11:10:52 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:55.700 11:10:52 -- host/auth.sh@70 -- # keyid=0 00:24:55.700 11:10:52 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.700 11:10:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:55.700 11:10:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.700 11:10:52 -- common/autotest_common.sh@10 -- # set +x 00:24:55.700 11:10:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.700 11:10:52 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:55.700 11:10:52 -- nvmf/common.sh@717 -- # local ip 00:24:55.700 11:10:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:55.700 11:10:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:55.700 11:10:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.700 11:10:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.700 11:10:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:55.700 11:10:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.700 11:10:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:55.700 11:10:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:55.700 11:10:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:55.700 11:10:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.700 11:10:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.700 11:10:52 -- common/autotest_common.sh@10 -- # set +x 00:24:56.274 nvme0n1 00:24:56.274 11:10:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.274 11:10:52 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.274 11:10:52 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:56.274 11:10:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.274 11:10:52 -- common/autotest_common.sh@10 -- # set +x 00:24:56.274 11:10:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.274 11:10:52 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.274 11:10:52 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.274 11:10:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.274 11:10:52 -- common/autotest_common.sh@10 -- # set +x 00:24:56.274 11:10:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.274 11:10:52 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:56.274 11:10:52 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:56.274 11:10:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.274 11:10:52 -- host/auth.sh@44 -- # digest=sha256 00:24:56.274 11:10:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.274 11:10:52 -- host/auth.sh@44 -- # keyid=1 00:24:56.274 11:10:52 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:56.274 11:10:52 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 
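On the initiator side each slot boils down to three RPCs plus a jq check, all issued through rpc_cmd (the autotest helper assumed here to forward to SPDK's scripts/rpc.py over the RPC socket). The key names key1/ckey1 refer to key objects set up earlier in auth.sh, outside this part of the log. The equivalent direct invocations for the ffdhe8192/keyid=1 pass traced above would look roughly like this (the rpc.py path is an assumption):

  rpc=./scripts/rpc.py    # assumed location of the SPDK RPC client
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $rpc bdev_nvme_detach_controller nvme0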
00:24:56.274 11:10:52 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.274 11:10:52 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.274 11:10:52 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:56.274 11:10:52 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:24:56.274 11:10:52 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:56.274 11:10:52 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:24:56.274 11:10:52 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:56.274 11:10:52 -- host/auth.sh@70 -- # digest=sha256 00:24:56.274 11:10:52 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:56.274 11:10:52 -- host/auth.sh@70 -- # keyid=1 00:24:56.274 11:10:52 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.274 11:10:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:56.274 11:10:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.274 11:10:52 -- common/autotest_common.sh@10 -- # set +x 00:24:56.274 11:10:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.274 11:10:52 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:56.274 11:10:52 -- nvmf/common.sh@717 -- # local ip 00:24:56.274 11:10:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:56.274 11:10:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:56.274 11:10:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.274 11:10:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.274 11:10:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:56.274 11:10:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.274 11:10:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:56.274 11:10:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:56.274 11:10:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:56.274 11:10:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.274 11:10:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.274 11:10:52 -- common/autotest_common.sh@10 -- # set +x 00:24:57.216 nvme0n1 00:24:57.216 11:10:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.216 11:10:53 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.216 11:10:53 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:57.216 11:10:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.216 11:10:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.216 11:10:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.216 11:10:53 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.217 11:10:53 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.217 11:10:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.217 11:10:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.217 11:10:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.217 11:10:53 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:57.217 11:10:53 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:57.217 11:10:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.217 
11:10:53 -- host/auth.sh@44 -- # digest=sha256 00:24:57.217 11:10:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.217 11:10:53 -- host/auth.sh@44 -- # keyid=2 00:24:57.217 11:10:53 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:57.217 11:10:53 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:57.217 11:10:53 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.217 11:10:53 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.217 11:10:53 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:24:57.217 11:10:53 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:24:57.217 11:10:53 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:24:57.217 11:10:53 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:24:57.217 11:10:53 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:57.217 11:10:53 -- host/auth.sh@70 -- # digest=sha256 00:24:57.217 11:10:53 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:57.217 11:10:53 -- host/auth.sh@70 -- # keyid=2 00:24:57.217 11:10:53 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.217 11:10:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.217 11:10:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.217 11:10:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.217 11:10:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.217 11:10:53 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:57.217 11:10:53 -- nvmf/common.sh@717 -- # local ip 00:24:57.217 11:10:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:57.217 11:10:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:57.217 11:10:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.217 11:10:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.217 11:10:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:57.217 11:10:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.217 11:10:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:57.217 11:10:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:57.217 11:10:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:57.217 11:10:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.217 11:10:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.217 11:10:53 -- common/autotest_common.sh@10 -- # set +x 00:24:57.790 nvme0n1 00:24:57.790 11:10:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.790 11:10:54 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.790 11:10:54 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:57.790 11:10:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.790 11:10:54 -- common/autotest_common.sh@10 -- # set +x 00:24:58.052 11:10:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.052 11:10:54 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.052 11:10:54 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.052 11:10:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.052 11:10:54 -- common/autotest_common.sh@10 -- # set +x 00:24:58.052 
11:10:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.052 11:10:54 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:58.052 11:10:54 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:58.052 11:10:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.052 11:10:54 -- host/auth.sh@44 -- # digest=sha256 00:24:58.052 11:10:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.052 11:10:54 -- host/auth.sh@44 -- # keyid=3 00:24:58.052 11:10:54 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:58.052 11:10:54 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:58.052 11:10:54 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.052 11:10:54 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.052 11:10:54 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:24:58.052 11:10:54 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:24:58.052 11:10:54 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:24:58.052 11:10:54 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:24:58.052 11:10:54 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:58.052 11:10:54 -- host/auth.sh@70 -- # digest=sha256 00:24:58.053 11:10:54 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:58.053 11:10:54 -- host/auth.sh@70 -- # keyid=3 00:24:58.053 11:10:54 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.053 11:10:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.053 11:10:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.053 11:10:54 -- common/autotest_common.sh@10 -- # set +x 00:24:58.053 11:10:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.053 11:10:54 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:58.053 11:10:54 -- nvmf/common.sh@717 -- # local ip 00:24:58.053 11:10:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:58.053 11:10:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:58.053 11:10:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.053 11:10:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.053 11:10:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:58.053 11:10:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.053 11:10:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:58.053 11:10:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:58.053 11:10:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:58.053 11:10:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.053 11:10:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.053 11:10:54 -- common/autotest_common.sh@10 -- # set +x 00:24:58.624 nvme0n1 00:24:58.624 11:10:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.624 11:10:55 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.625 11:10:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.625 11:10:55 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:58.625 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:24:58.886 11:10:55 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.886 11:10:55 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.886 11:10:55 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.886 11:10:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.886 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:24:58.886 11:10:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.886 11:10:55 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:58.886 11:10:55 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:58.886 11:10:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.886 11:10:55 -- host/auth.sh@44 -- # digest=sha256 00:24:58.886 11:10:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.886 11:10:55 -- host/auth.sh@44 -- # keyid=4 00:24:58.886 11:10:55 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:58.886 11:10:55 -- host/auth.sh@46 -- # ckey= 00:24:58.886 11:10:55 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.886 11:10:55 -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.886 11:10:55 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:24:58.886 11:10:55 -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.886 11:10:55 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:24:58.886 11:10:55 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:58.886 11:10:55 -- host/auth.sh@70 -- # digest=sha256 00:24:58.886 11:10:55 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:58.886 11:10:55 -- host/auth.sh@70 -- # keyid=4 00:24:58.886 11:10:55 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.886 11:10:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.886 11:10:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.886 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:24:58.886 11:10:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.886 11:10:55 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:58.886 11:10:55 -- nvmf/common.sh@717 -- # local ip 00:24:58.886 11:10:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:58.886 11:10:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:58.886 11:10:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.886 11:10:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.886 11:10:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:58.886 11:10:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.886 11:10:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:58.886 11:10:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:58.886 11:10:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:58.886 11:10:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.886 11:10:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.886 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:24:59.458 nvme0n1 00:24:59.458 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.458 11:10:56 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.458 11:10:56 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:59.458 11:10:56 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.458 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.719 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.719 11:10:56 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.719 11:10:56 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.719 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.719 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.719 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.719 11:10:56 -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:24:59.719 11:10:56 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.719 11:10:56 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:59.719 11:10:56 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:59.719 11:10:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.719 11:10:56 -- host/auth.sh@44 -- # digest=sha384 00:24:59.719 11:10:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.719 11:10:56 -- host/auth.sh@44 -- # keyid=0 00:24:59.719 11:10:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:59.719 11:10:56 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:59.719 11:10:56 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.719 11:10:56 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.719 11:10:56 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:24:59.719 11:10:56 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:24:59.719 11:10:56 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:24:59.719 11:10:56 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:24:59.719 11:10:56 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:59.719 11:10:56 -- host/auth.sh@70 -- # digest=sha384 00:24:59.719 11:10:56 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:59.719 11:10:56 -- host/auth.sh@70 -- # keyid=0 00:24:59.719 11:10:56 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.719 11:10:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.719 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.719 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.719 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.719 11:10:56 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:59.719 11:10:56 -- nvmf/common.sh@717 -- # local ip 00:24:59.719 11:10:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:59.719 11:10:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:59.719 11:10:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.719 11:10:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.719 11:10:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:59.719 11:10:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.719 11:10:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:59.719 11:10:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:59.719 11:10:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:59.719 11:10:56 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.719 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.719 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.719 nvme0n1 00:24:59.719 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.719 11:10:56 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.719 11:10:56 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:59.719 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.719 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.719 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.980 11:10:56 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.980 11:10:56 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.980 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.980 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.980 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.980 11:10:56 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:59.980 11:10:56 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:59.980 11:10:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.980 11:10:56 -- host/auth.sh@44 -- # digest=sha384 00:24:59.980 11:10:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.980 11:10:56 -- host/auth.sh@44 -- # keyid=1 00:24:59.980 11:10:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:59.980 11:10:56 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:59.980 11:10:56 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.980 11:10:56 -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.980 11:10:56 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:24:59.980 11:10:56 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:24:59.980 11:10:56 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:24:59.980 11:10:56 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:24:59.980 11:10:56 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:59.980 11:10:56 -- host/auth.sh@70 -- # digest=sha384 00:24:59.980 11:10:56 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:59.980 11:10:56 -- host/auth.sh@70 -- # keyid=1 00:24:59.980 11:10:56 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.980 11:10:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.980 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.980 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.980 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.980 11:10:56 -- host/auth.sh@74 -- # get_main_ns_ip 00:24:59.980 11:10:56 -- nvmf/common.sh@717 -- # local ip 00:24:59.980 11:10:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:59.980 11:10:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:59.980 11:10:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.980 11:10:56 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.980 11:10:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:59.980 11:10:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.980 11:10:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:59.980 11:10:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:59.980 11:10:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:59.980 11:10:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.980 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.980 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.980 nvme0n1 00:24:59.980 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.980 11:10:56 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.980 11:10:56 -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:59.980 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.980 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.980 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.980 11:10:56 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.980 11:10:56 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.980 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.980 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:59.980 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.241 11:10:56 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:00.241 11:10:56 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:00.241 11:10:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.241 11:10:56 -- host/auth.sh@44 -- # digest=sha384 00:25:00.241 11:10:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.241 11:10:56 -- host/auth.sh@44 -- # keyid=2 00:25:00.241 11:10:56 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:00.241 11:10:56 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:00.241 11:10:56 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.241 11:10:56 -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.241 11:10:56 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:00.241 11:10:56 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:00.241 11:10:56 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:00.241 11:10:56 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:25:00.241 11:10:56 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:00.241 11:10:56 -- host/auth.sh@70 -- # digest=sha384 00:25:00.241 11:10:56 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:00.241 11:10:56 -- host/auth.sh@70 -- # keyid=2 00:25:00.241 11:10:56 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.241 11:10:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.241 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.241 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.241 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.241 11:10:56 -- host/auth.sh@74 -- # get_main_ns_ip 
00:25:00.241 11:10:56 -- nvmf/common.sh@717 -- # local ip 00:25:00.241 11:10:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.242 11:10:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.242 11:10:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.242 11:10:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.242 11:10:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.242 11:10:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.242 11:10:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.242 11:10:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.242 11:10:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.242 11:10:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.242 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.242 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.242 nvme0n1 00:25:00.242 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.242 11:10:56 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.242 11:10:56 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:00.242 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.242 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.242 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.242 11:10:56 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.242 11:10:56 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.242 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.242 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.242 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.242 11:10:56 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:00.242 11:10:56 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:00.242 11:10:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.242 11:10:56 -- host/auth.sh@44 -- # digest=sha384 00:25:00.242 11:10:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.242 11:10:56 -- host/auth.sh@44 -- # keyid=3 00:25:00.242 11:10:56 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:00.242 11:10:56 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:00.242 11:10:56 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.242 11:10:56 -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.242 11:10:56 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:00.242 11:10:56 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:00.242 11:10:56 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:00.242 11:10:56 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 3 00:25:00.242 11:10:56 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:00.242 11:10:56 -- host/auth.sh@70 -- # digest=sha384 00:25:00.242 11:10:56 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:00.242 11:10:56 -- host/auth.sh@70 -- # keyid=3 00:25:00.242 11:10:56 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.242 11:10:56 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.242 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.242 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.242 11:10:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.242 11:10:56 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:00.242 11:10:56 -- nvmf/common.sh@717 -- # local ip 00:25:00.242 11:10:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.242 11:10:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.242 11:10:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.242 11:10:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.242 11:10:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.242 11:10:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.242 11:10:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.242 11:10:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.242 11:10:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.242 11:10:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.242 11:10:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.242 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.503 nvme0n1 00:25:00.503 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.503 11:10:57 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.503 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.503 11:10:57 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:00.503 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:00.503 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.503 11:10:57 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.503 11:10:57 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.503 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.503 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:00.503 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.503 11:10:57 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:00.503 11:10:57 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:00.503 11:10:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.503 11:10:57 -- host/auth.sh@44 -- # digest=sha384 00:25:00.503 11:10:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.503 11:10:57 -- host/auth.sh@44 -- # keyid=4 00:25:00.504 11:10:57 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:00.504 11:10:57 -- host/auth.sh@46 -- # ckey= 00:25:00.504 11:10:57 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.504 11:10:57 -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.504 11:10:57 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:00.504 11:10:57 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.504 11:10:57 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:25:00.504 11:10:57 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:00.504 11:10:57 -- host/auth.sh@70 -- # digest=sha384 00:25:00.504 11:10:57 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:00.504 11:10:57 -- 
host/auth.sh@70 -- # keyid=4 00:25:00.504 11:10:57 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.504 11:10:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.504 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.504 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:00.504 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.504 11:10:57 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:00.504 11:10:57 -- nvmf/common.sh@717 -- # local ip 00:25:00.504 11:10:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.504 11:10:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.504 11:10:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.504 11:10:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.504 11:10:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.504 11:10:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.504 11:10:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.504 11:10:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.504 11:10:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.504 11:10:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.504 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.504 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 nvme0n1 00:25:00.764 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.764 11:10:57 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.764 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.764 11:10:57 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:00.764 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.764 11:10:57 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.764 11:10:57 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.764 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.764 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.764 11:10:57 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.764 11:10:57 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:00.764 11:10:57 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:00.764 11:10:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.764 11:10:57 -- host/auth.sh@44 -- # digest=sha384 00:25:00.764 11:10:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.764 11:10:57 -- host/auth.sh@44 -- # keyid=0 00:25:00.764 11:10:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:00.764 11:10:57 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:00.764 11:10:57 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.764 11:10:57 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.764 11:10:57 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:00.764 11:10:57 -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:00.764 11:10:57 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:00.764 11:10:57 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:25:00.764 11:10:57 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:00.764 11:10:57 -- host/auth.sh@70 -- # digest=sha384 00:25:00.764 11:10:57 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:00.764 11:10:57 -- host/auth.sh@70 -- # keyid=0 00:25:00.764 11:10:57 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.764 11:10:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:00.764 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.764 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.764 11:10:57 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:00.764 11:10:57 -- nvmf/common.sh@717 -- # local ip 00:25:00.764 11:10:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.764 11:10:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.764 11:10:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.764 11:10:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.764 11:10:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.764 11:10:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.764 11:10:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.764 11:10:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.764 11:10:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.764 11:10:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.764 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.764 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.025 nvme0n1 00:25:01.025 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.025 11:10:57 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.025 11:10:57 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:01.025 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.025 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.025 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.025 11:10:57 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.025 11:10:57 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.025 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.025 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.025 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.025 11:10:57 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:01.025 11:10:57 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:01.025 11:10:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.025 11:10:57 -- host/auth.sh@44 -- # digest=sha384 00:25:01.025 11:10:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.025 11:10:57 -- host/auth.sh@44 -- # keyid=1 00:25:01.025 11:10:57 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:01.025 11:10:57 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:01.025 11:10:57 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.025 11:10:57 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.025 11:10:57 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:01.025 11:10:57 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:01.025 11:10:57 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:01.025 11:10:57 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:25:01.025 11:10:57 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:01.025 11:10:57 -- host/auth.sh@70 -- # digest=sha384 00:25:01.025 11:10:57 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:01.025 11:10:57 -- host/auth.sh@70 -- # keyid=1 00:25:01.025 11:10:57 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.025 11:10:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.025 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.025 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.025 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.025 11:10:57 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:01.025 11:10:57 -- nvmf/common.sh@717 -- # local ip 00:25:01.025 11:10:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:01.025 11:10:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:01.025 11:10:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.025 11:10:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.025 11:10:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:01.025 11:10:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.025 11:10:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:01.025 11:10:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:01.025 11:10:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:01.025 11:10:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.025 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.025 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.286 nvme0n1 00:25:01.286 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.286 11:10:57 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.286 11:10:57 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:01.286 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.286 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.286 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.286 11:10:57 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.286 11:10:57 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.286 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.286 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.286 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.286 11:10:57 -- 
host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:01.286 11:10:57 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:01.286 11:10:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.286 11:10:57 -- host/auth.sh@44 -- # digest=sha384 00:25:01.286 11:10:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.286 11:10:57 -- host/auth.sh@44 -- # keyid=2 00:25:01.286 11:10:57 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:01.286 11:10:57 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:01.286 11:10:57 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.286 11:10:57 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.286 11:10:57 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:01.286 11:10:57 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:01.286 11:10:57 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:01.286 11:10:57 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:25:01.286 11:10:57 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:01.286 11:10:57 -- host/auth.sh@70 -- # digest=sha384 00:25:01.286 11:10:57 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:01.286 11:10:57 -- host/auth.sh@70 -- # keyid=2 00:25:01.286 11:10:57 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.286 11:10:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.286 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.286 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.286 11:10:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.286 11:10:57 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:01.286 11:10:57 -- nvmf/common.sh@717 -- # local ip 00:25:01.286 11:10:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:01.286 11:10:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:01.286 11:10:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.286 11:10:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.286 11:10:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:01.286 11:10:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.286 11:10:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:01.286 11:10:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:01.286 11:10:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:01.286 11:10:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:01.286 11:10:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.286 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:01.547 nvme0n1 00:25:01.547 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.547 11:10:58 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.547 11:10:58 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:01.547 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.547 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:01.547 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.547 11:10:58 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.547 
11:10:58 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.547 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.547 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:01.547 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.547 11:10:58 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:01.547 11:10:58 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:01.547 11:10:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.547 11:10:58 -- host/auth.sh@44 -- # digest=sha384 00:25:01.547 11:10:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.547 11:10:58 -- host/auth.sh@44 -- # keyid=3 00:25:01.547 11:10:58 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:01.547 11:10:58 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:01.547 11:10:58 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.547 11:10:58 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.547 11:10:58 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:01.547 11:10:58 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:01.547 11:10:58 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:01.547 11:10:58 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:25:01.547 11:10:58 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:01.547 11:10:58 -- host/auth.sh@70 -- # digest=sha384 00:25:01.547 11:10:58 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:01.547 11:10:58 -- host/auth.sh@70 -- # keyid=3 00:25:01.547 11:10:58 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.547 11:10:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.547 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.547 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:01.547 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.547 11:10:58 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:01.547 11:10:58 -- nvmf/common.sh@717 -- # local ip 00:25:01.547 11:10:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:01.547 11:10:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:01.547 11:10:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.547 11:10:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.547 11:10:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:01.547 11:10:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.547 11:10:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:01.547 11:10:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:01.547 11:10:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:01.547 11:10:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:01.547 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.547 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:01.808 nvme0n1 00:25:01.808 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.808 11:10:58 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
00:25:01.808 11:10:58 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:01.808 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.808 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:01.808 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.808 11:10:58 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.808 11:10:58 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.808 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.808 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:01.808 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.808 11:10:58 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:01.808 11:10:58 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:01.808 11:10:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.808 11:10:58 -- host/auth.sh@44 -- # digest=sha384 00:25:01.808 11:10:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.808 11:10:58 -- host/auth.sh@44 -- # keyid=4 00:25:01.808 11:10:58 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:01.808 11:10:58 -- host/auth.sh@46 -- # ckey= 00:25:01.808 11:10:58 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.808 11:10:58 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.808 11:10:58 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:01.808 11:10:58 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.808 11:10:58 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:25:01.808 11:10:58 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:01.808 11:10:58 -- host/auth.sh@70 -- # digest=sha384 00:25:01.808 11:10:58 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:01.808 11:10:58 -- host/auth.sh@70 -- # keyid=4 00:25:01.808 11:10:58 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.808 11:10:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.808 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.808 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:01.808 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.808 11:10:58 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:01.808 11:10:58 -- nvmf/common.sh@717 -- # local ip 00:25:01.808 11:10:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:01.808 11:10:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:01.808 11:10:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.808 11:10:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.808 11:10:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:01.808 11:10:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.808 11:10:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:01.808 11:10:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:01.808 11:10:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:01.808 11:10:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.808 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.808 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:02.068 nvme0n1 
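The same set-key/connect/verify/detach cycle repeats for every digest, DH group, and key index, as traced at host/auth.sh@113-117. Below is a bash sketch of that sweep together with the target-side nvmet_auth_set_key step; the loop structure and the echoed values (hmac(<digest>), the FFDHE group, the DHHC-1 key and optional controller key) follow the trace, while $host_cfs and the configfs attribute names are assumptions for illustration and are not shown in the log.

  # Sweep every digest x DH group x key index, as traced at host/auth.sh@113-117.
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done

  # Target-side key programming; the echoed values match the trace, but
  # $host_cfs (a hypothetical nvmet configfs host directory) and the attribute
  # names below are assumptions, not taken from the log.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      echo "hmac(${digest})" > "${host_cfs}/dhchap_hash"
      echo "$dhgroup"        > "${host_cfs}/dhchap_dhgroup"
      echo "$key"            > "${host_cfs}/dhchap_key"
      [[ -z $ckey ]] || echo "$ckey" > "${host_cfs}/dhchap_ctrl_key"
  }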
00:25:02.068 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.068 11:10:58 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.068 11:10:58 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:02.068 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.068 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:02.068 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.068 11:10:58 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.068 11:10:58 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.068 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.068 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:02.068 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.068 11:10:58 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.068 11:10:58 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:02.068 11:10:58 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:02.068 11:10:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.068 11:10:58 -- host/auth.sh@44 -- # digest=sha384 00:25:02.068 11:10:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.068 11:10:58 -- host/auth.sh@44 -- # keyid=0 00:25:02.068 11:10:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:02.068 11:10:58 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:02.068 11:10:58 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.068 11:10:58 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.068 11:10:58 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:02.068 11:10:58 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:02.068 11:10:58 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:02.068 11:10:58 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:25:02.068 11:10:58 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:02.068 11:10:58 -- host/auth.sh@70 -- # digest=sha384 00:25:02.069 11:10:58 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:02.069 11:10:58 -- host/auth.sh@70 -- # keyid=0 00:25:02.069 11:10:58 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.069 11:10:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:02.069 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.069 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:02.069 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.069 11:10:58 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:02.069 11:10:58 -- nvmf/common.sh@717 -- # local ip 00:25:02.069 11:10:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:02.069 11:10:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:02.069 11:10:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.069 11:10:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.069 11:10:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:02.069 11:10:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.069 11:10:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:25:02.069 11:10:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:02.069 11:10:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:02.069 11:10:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:02.069 11:10:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.069 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:25:02.640 nvme0n1 00:25:02.640 11:10:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.640 11:10:58 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.640 11:10:58 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:02.640 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.640 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:02.640 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.640 11:10:59 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.640 11:10:59 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.640 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.640 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:02.640 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.640 11:10:59 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:02.640 11:10:59 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:02.640 11:10:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.640 11:10:59 -- host/auth.sh@44 -- # digest=sha384 00:25:02.640 11:10:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.640 11:10:59 -- host/auth.sh@44 -- # keyid=1 00:25:02.640 11:10:59 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:02.640 11:10:59 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:02.640 11:10:59 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.640 11:10:59 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.640 11:10:59 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:02.640 11:10:59 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:02.640 11:10:59 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:02.640 11:10:59 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:25:02.640 11:10:59 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:02.640 11:10:59 -- host/auth.sh@70 -- # digest=sha384 00:25:02.640 11:10:59 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:02.640 11:10:59 -- host/auth.sh@70 -- # keyid=1 00:25:02.640 11:10:59 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.640 11:10:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:02.640 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.640 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:02.640 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.640 11:10:59 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:02.640 11:10:59 -- nvmf/common.sh@717 -- # local ip 00:25:02.640 11:10:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:02.640 11:10:59 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:25:02.640 11:10:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.640 11:10:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.640 11:10:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:02.640 11:10:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.640 11:10:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:02.640 11:10:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:02.640 11:10:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:02.640 11:10:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.640 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.640 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:02.902 nvme0n1 00:25:02.902 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.902 11:10:59 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.902 11:10:59 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:02.902 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.902 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:02.902 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.902 11:10:59 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.902 11:10:59 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.902 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.902 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:02.902 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.902 11:10:59 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:02.902 11:10:59 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:02.902 11:10:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.902 11:10:59 -- host/auth.sh@44 -- # digest=sha384 00:25:02.902 11:10:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.902 11:10:59 -- host/auth.sh@44 -- # keyid=2 00:25:02.902 11:10:59 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:02.902 11:10:59 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:02.902 11:10:59 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.902 11:10:59 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.902 11:10:59 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:02.902 11:10:59 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:02.902 11:10:59 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:02.902 11:10:59 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:25:02.902 11:10:59 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:02.902 11:10:59 -- host/auth.sh@70 -- # digest=sha384 00:25:02.902 11:10:59 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:02.902 11:10:59 -- host/auth.sh@70 -- # keyid=2 00:25:02.902 11:10:59 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.902 11:10:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:02.902 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.902 11:10:59 -- 
common/autotest_common.sh@10 -- # set +x 00:25:02.902 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.902 11:10:59 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:02.902 11:10:59 -- nvmf/common.sh@717 -- # local ip 00:25:02.902 11:10:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:02.902 11:10:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:02.902 11:10:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.902 11:10:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.902 11:10:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:02.902 11:10:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.902 11:10:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:02.902 11:10:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:02.902 11:10:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:02.902 11:10:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.902 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.902 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:03.162 nvme0n1 00:25:03.162 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.162 11:10:59 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.162 11:10:59 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:03.162 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.162 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:03.162 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.162 11:10:59 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.162 11:10:59 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.162 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.162 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:03.162 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.162 11:10:59 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:03.162 11:10:59 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:03.162 11:10:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.162 11:10:59 -- host/auth.sh@44 -- # digest=sha384 00:25:03.162 11:10:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.162 11:10:59 -- host/auth.sh@44 -- # keyid=3 00:25:03.162 11:10:59 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:03.162 11:10:59 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:03.162 11:10:59 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.162 11:10:59 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.162 11:10:59 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:03.162 11:10:59 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:03.162 11:10:59 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:03.162 11:10:59 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:25:03.162 11:10:59 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:03.162 11:10:59 -- host/auth.sh@70 -- # digest=sha384 00:25:03.162 11:10:59 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:03.162 11:10:59 -- 
host/auth.sh@70 -- # keyid=3 00:25:03.162 11:10:59 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.162 11:10:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:03.162 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.162 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:03.162 11:10:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.162 11:10:59 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:03.162 11:10:59 -- nvmf/common.sh@717 -- # local ip 00:25:03.162 11:10:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:03.162 11:10:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:03.162 11:10:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.162 11:10:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.162 11:10:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:03.162 11:10:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.162 11:10:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:03.162 11:10:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:03.162 11:10:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:03.162 11:10:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.162 11:10:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.162 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:03.422 nvme0n1 00:25:03.422 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.422 11:11:00 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.422 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.422 11:11:00 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:03.422 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.422 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.680 11:11:00 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.680 11:11:00 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.680 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.680 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.680 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.680 11:11:00 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:03.680 11:11:00 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:03.680 11:11:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.680 11:11:00 -- host/auth.sh@44 -- # digest=sha384 00:25:03.680 11:11:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.680 11:11:00 -- host/auth.sh@44 -- # keyid=4 00:25:03.680 11:11:00 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:03.680 11:11:00 -- host/auth.sh@46 -- # ckey= 00:25:03.680 11:11:00 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.680 11:11:00 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.680 11:11:00 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:03.680 11:11:00 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.680 11:11:00 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:25:03.680 11:11:00 -- host/auth.sh@68 -- # local digest 
dhgroup keyid ckey 00:25:03.680 11:11:00 -- host/auth.sh@70 -- # digest=sha384 00:25:03.680 11:11:00 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:03.680 11:11:00 -- host/auth.sh@70 -- # keyid=4 00:25:03.680 11:11:00 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.680 11:11:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:03.680 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.680 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.680 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.680 11:11:00 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:03.680 11:11:00 -- nvmf/common.sh@717 -- # local ip 00:25:03.680 11:11:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:03.680 11:11:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:03.680 11:11:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.680 11:11:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.680 11:11:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:03.680 11:11:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.680 11:11:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:03.680 11:11:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:03.680 11:11:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:03.680 11:11:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.680 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.680 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.939 nvme0n1 00:25:03.939 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.939 11:11:00 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.939 11:11:00 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:03.939 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.939 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.939 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.939 11:11:00 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.939 11:11:00 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.939 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.939 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.939 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.939 11:11:00 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.939 11:11:00 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:03.939 11:11:00 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:03.939 11:11:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.939 11:11:00 -- host/auth.sh@44 -- # digest=sha384 00:25:03.939 11:11:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.939 11:11:00 -- host/auth.sh@44 -- # keyid=0 00:25:03.939 11:11:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:03.939 11:11:00 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:03.939 11:11:00 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.939 11:11:00 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:03.939 11:11:00 -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:03.939 11:11:00 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:03.939 11:11:00 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:03.939 11:11:00 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:25:03.939 11:11:00 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:03.939 11:11:00 -- host/auth.sh@70 -- # digest=sha384 00:25:03.939 11:11:00 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:03.939 11:11:00 -- host/auth.sh@70 -- # keyid=0 00:25:03.939 11:11:00 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.939 11:11:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:03.939 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.939 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.939 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.939 11:11:00 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:03.939 11:11:00 -- nvmf/common.sh@717 -- # local ip 00:25:03.939 11:11:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:03.939 11:11:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:03.939 11:11:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.939 11:11:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.939 11:11:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:03.939 11:11:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.939 11:11:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:03.939 11:11:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:03.939 11:11:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:03.939 11:11:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.939 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.939 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:04.507 nvme0n1 00:25:04.507 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.507 11:11:00 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.507 11:11:00 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:04.507 11:11:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.507 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:25:04.507 11:11:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.507 11:11:01 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.507 11:11:01 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.507 11:11:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.507 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:25:04.507 11:11:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.507 11:11:01 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:04.507 11:11:01 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:04.507 11:11:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.507 11:11:01 -- host/auth.sh@44 -- # digest=sha384 00:25:04.507 11:11:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.507 11:11:01 -- host/auth.sh@44 -- # keyid=1 
00:25:04.507 11:11:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:04.507 11:11:01 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:04.507 11:11:01 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.507 11:11:01 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.507 11:11:01 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:04.507 11:11:01 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:04.507 11:11:01 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:04.507 11:11:01 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:25:04.507 11:11:01 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:04.507 11:11:01 -- host/auth.sh@70 -- # digest=sha384 00:25:04.507 11:11:01 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:04.507 11:11:01 -- host/auth.sh@70 -- # keyid=1 00:25:04.507 11:11:01 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.507 11:11:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:04.507 11:11:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.507 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:25:04.507 11:11:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.507 11:11:01 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:04.507 11:11:01 -- nvmf/common.sh@717 -- # local ip 00:25:04.507 11:11:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:04.507 11:11:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:04.507 11:11:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.507 11:11:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.507 11:11:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:04.507 11:11:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.507 11:11:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:04.507 11:11:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:04.507 11:11:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:04.507 11:11:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.507 11:11:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.507 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:25:05.077 nvme0n1 00:25:05.077 11:11:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.077 11:11:01 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.077 11:11:01 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:05.077 11:11:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.077 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:25:05.077 11:11:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.077 11:11:01 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.077 11:11:01 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.077 11:11:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.077 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:25:05.077 11:11:01 -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:25:05.077 11:11:01 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:05.077 11:11:01 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:05.077 11:11:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.077 11:11:01 -- host/auth.sh@44 -- # digest=sha384 00:25:05.077 11:11:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.077 11:11:01 -- host/auth.sh@44 -- # keyid=2 00:25:05.077 11:11:01 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:05.077 11:11:01 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:05.077 11:11:01 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.077 11:11:01 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.077 11:11:01 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:05.077 11:11:01 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:05.077 11:11:01 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:05.077 11:11:01 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:25:05.077 11:11:01 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:05.077 11:11:01 -- host/auth.sh@70 -- # digest=sha384 00:25:05.077 11:11:01 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:05.077 11:11:01 -- host/auth.sh@70 -- # keyid=2 00:25:05.077 11:11:01 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.077 11:11:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:05.077 11:11:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.077 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:25:05.077 11:11:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.077 11:11:01 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:05.077 11:11:01 -- nvmf/common.sh@717 -- # local ip 00:25:05.077 11:11:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:05.077 11:11:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:05.077 11:11:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.077 11:11:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.077 11:11:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:05.077 11:11:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.077 11:11:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:05.077 11:11:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:05.077 11:11:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:05.077 11:11:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.077 11:11:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.077 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:25:05.647 nvme0n1 00:25:05.647 11:11:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.647 11:11:02 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.647 11:11:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.647 11:11:02 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:05.647 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:25:05.647 11:11:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.647 11:11:02 -- host/auth.sh@77 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:05.647 11:11:02 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.647 11:11:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.647 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:25:05.647 11:11:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.647 11:11:02 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:05.647 11:11:02 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:05.647 11:11:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.647 11:11:02 -- host/auth.sh@44 -- # digest=sha384 00:25:05.647 11:11:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.647 11:11:02 -- host/auth.sh@44 -- # keyid=3 00:25:05.647 11:11:02 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:05.647 11:11:02 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:05.647 11:11:02 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.647 11:11:02 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.647 11:11:02 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:05.647 11:11:02 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:05.647 11:11:02 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:05.647 11:11:02 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:25:05.647 11:11:02 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:05.647 11:11:02 -- host/auth.sh@70 -- # digest=sha384 00:25:05.647 11:11:02 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:05.647 11:11:02 -- host/auth.sh@70 -- # keyid=3 00:25:05.647 11:11:02 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.647 11:11:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:05.647 11:11:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.647 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:25:05.647 11:11:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.647 11:11:02 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:05.647 11:11:02 -- nvmf/common.sh@717 -- # local ip 00:25:05.647 11:11:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:05.647 11:11:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:05.647 11:11:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.647 11:11:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.647 11:11:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:05.647 11:11:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.647 11:11:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:05.648 11:11:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:05.648 11:11:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:05.648 11:11:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.648 11:11:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.648 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:25:06.216 nvme0n1 00:25:06.216 11:11:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.216 11:11:02 -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:06.216 11:11:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.216 11:11:02 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:06.216 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:25:06.216 11:11:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.217 11:11:02 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.217 11:11:02 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.217 11:11:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.217 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:25:06.217 11:11:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.217 11:11:02 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:06.217 11:11:02 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:06.217 11:11:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.217 11:11:02 -- host/auth.sh@44 -- # digest=sha384 00:25:06.217 11:11:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:06.217 11:11:02 -- host/auth.sh@44 -- # keyid=4 00:25:06.217 11:11:02 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:06.217 11:11:02 -- host/auth.sh@46 -- # ckey= 00:25:06.217 11:11:02 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.217 11:11:02 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:06.217 11:11:02 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:06.217 11:11:02 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.217 11:11:02 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:25:06.217 11:11:02 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:06.217 11:11:02 -- host/auth.sh@70 -- # digest=sha384 00:25:06.217 11:11:02 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:06.217 11:11:02 -- host/auth.sh@70 -- # keyid=4 00:25:06.217 11:11:02 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.217 11:11:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:06.217 11:11:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.217 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:25:06.217 11:11:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.217 11:11:02 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:06.217 11:11:02 -- nvmf/common.sh@717 -- # local ip 00:25:06.217 11:11:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:06.217 11:11:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:06.217 11:11:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.217 11:11:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.217 11:11:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:06.217 11:11:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.217 11:11:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:06.217 11:11:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:06.217 11:11:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:06.217 11:11:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.217 11:11:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.217 11:11:02 -- common/autotest_common.sh@10 -- # set +x 
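[editor's annotation] The entries above all come from one nested test loop in host/auth.sh: for each digest, each DH group, and each key index, the target-side key is programmed and an authenticated connect is attempted. A rough sketch of that loop, reconstructed from the script line references visible in the xtrace output (this is a reconstruction for orientation, not the verbatim script source):

    # Sketch of the loop driving the trace (reconstructed from host/auth.sh@113-@117 as seen above).
    for digest in "${digests[@]}"; do               # host/auth.sh@113: sha256, sha384, sha512, ...
        for dhgroup in "${dhgroups[@]}"; do         # host/auth.sh@114: ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do          # host/auth.sh@115: key indices 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @116: program the target-side key
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @117: host-side connect + verify
            done
        done
    done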
00:25:06.789 nvme0n1 00:25:06.789 11:11:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.789 11:11:03 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.789 11:11:03 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:06.789 11:11:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.789 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:25:06.789 11:11:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.789 11:11:03 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.789 11:11:03 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.789 11:11:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.789 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:25:06.789 11:11:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.789 11:11:03 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.789 11:11:03 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:06.789 11:11:03 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:06.789 11:11:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.789 11:11:03 -- host/auth.sh@44 -- # digest=sha384 00:25:06.789 11:11:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.789 11:11:03 -- host/auth.sh@44 -- # keyid=0 00:25:06.789 11:11:03 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:06.789 11:11:03 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:06.789 11:11:03 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.789 11:11:03 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.789 11:11:03 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:06.789 11:11:03 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:06.789 11:11:03 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:06.789 11:11:03 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:25:06.789 11:11:03 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:06.789 11:11:03 -- host/auth.sh@70 -- # digest=sha384 00:25:06.789 11:11:03 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:06.789 11:11:03 -- host/auth.sh@70 -- # keyid=0 00:25:06.789 11:11:03 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.789 11:11:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.789 11:11:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.789 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:25:06.789 11:11:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.789 11:11:03 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:06.789 11:11:03 -- nvmf/common.sh@717 -- # local ip 00:25:06.789 11:11:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:06.789 11:11:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:06.789 11:11:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.789 11:11:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.789 11:11:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:06.789 11:11:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.789 11:11:03 -- nvmf/common.sh@724 -- # 
ip=NVMF_INITIATOR_IP 00:25:06.789 11:11:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:06.789 11:11:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:06.789 11:11:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.789 11:11:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.789 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:25:07.360 nvme0n1 00:25:07.360 11:11:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.360 11:11:03 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.360 11:11:03 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:07.360 11:11:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.360 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:25:07.360 11:11:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.620 11:11:04 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.620 11:11:04 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.620 11:11:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.620 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.620 11:11:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.620 11:11:04 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:07.620 11:11:04 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:07.620 11:11:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.620 11:11:04 -- host/auth.sh@44 -- # digest=sha384 00:25:07.620 11:11:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.620 11:11:04 -- host/auth.sh@44 -- # keyid=1 00:25:07.620 11:11:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:07.620 11:11:04 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:07.620 11:11:04 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.620 11:11:04 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.621 11:11:04 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:07.621 11:11:04 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:07.621 11:11:04 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:07.621 11:11:04 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:25:07.621 11:11:04 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:07.621 11:11:04 -- host/auth.sh@70 -- # digest=sha384 00:25:07.621 11:11:04 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:07.621 11:11:04 -- host/auth.sh@70 -- # keyid=1 00:25:07.621 11:11:04 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.621 11:11:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.621 11:11:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.621 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:25:07.621 11:11:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.621 11:11:04 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:07.621 11:11:04 -- nvmf/common.sh@717 -- # local ip 00:25:07.621 11:11:04 -- nvmf/common.sh@718 -- # ip_candidates=() 
00:25:07.621 11:11:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:07.621 11:11:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.621 11:11:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.621 11:11:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:07.621 11:11:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.621 11:11:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:07.621 11:11:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:07.621 11:11:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:07.621 11:11:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.621 11:11:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.621 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:25:08.193 nvme0n1 00:25:08.194 11:11:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.194 11:11:04 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.194 11:11:04 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:08.194 11:11:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.194 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:25:08.194 11:11:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.455 11:11:04 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.455 11:11:04 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.455 11:11:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.455 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:25:08.455 11:11:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.455 11:11:04 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:08.455 11:11:04 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:08.455 11:11:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.455 11:11:04 -- host/auth.sh@44 -- # digest=sha384 00:25:08.455 11:11:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.455 11:11:04 -- host/auth.sh@44 -- # keyid=2 00:25:08.455 11:11:04 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:08.455 11:11:04 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:08.455 11:11:04 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.455 11:11:04 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.455 11:11:04 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:08.455 11:11:04 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:08.455 11:11:04 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:08.455 11:11:04 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:25:08.455 11:11:04 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:08.455 11:11:04 -- host/auth.sh@70 -- # digest=sha384 00:25:08.455 11:11:04 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:08.455 11:11:04 -- host/auth.sh@70 -- # keyid=2 00:25:08.455 11:11:04 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.455 11:11:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:08.455 11:11:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.455 
11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:25:08.455 11:11:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.455 11:11:04 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:08.455 11:11:04 -- nvmf/common.sh@717 -- # local ip 00:25:08.455 11:11:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:08.455 11:11:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:08.455 11:11:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.455 11:11:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.455 11:11:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:08.455 11:11:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.455 11:11:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:08.455 11:11:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:08.455 11:11:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:08.455 11:11:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.455 11:11:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.455 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:25:09.028 nvme0n1 00:25:09.028 11:11:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.028 11:11:05 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.028 11:11:05 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:09.028 11:11:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.028 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:25:09.028 11:11:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.288 11:11:05 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.288 11:11:05 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.288 11:11:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.288 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:25:09.288 11:11:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.288 11:11:05 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:09.288 11:11:05 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:09.288 11:11:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.288 11:11:05 -- host/auth.sh@44 -- # digest=sha384 00:25:09.288 11:11:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.288 11:11:05 -- host/auth.sh@44 -- # keyid=3 00:25:09.288 11:11:05 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:09.288 11:11:05 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:09.288 11:11:05 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.288 11:11:05 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.288 11:11:05 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:09.288 11:11:05 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:09.288 11:11:05 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:09.288 11:11:05 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:25:09.288 11:11:05 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:09.288 11:11:05 -- host/auth.sh@70 -- # digest=sha384 00:25:09.288 11:11:05 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:09.288 
11:11:05 -- host/auth.sh@70 -- # keyid=3 00:25:09.288 11:11:05 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.288 11:11:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:09.288 11:11:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.288 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:25:09.288 11:11:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.288 11:11:05 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:09.288 11:11:05 -- nvmf/common.sh@717 -- # local ip 00:25:09.288 11:11:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:09.288 11:11:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:09.288 11:11:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.288 11:11:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.288 11:11:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:09.288 11:11:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.288 11:11:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:09.288 11:11:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:09.288 11:11:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:09.288 11:11:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.288 11:11:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.288 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:25:09.860 nvme0n1 00:25:09.860 11:11:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.860 11:11:06 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.860 11:11:06 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:09.860 11:11:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.860 11:11:06 -- common/autotest_common.sh@10 -- # set +x 00:25:09.860 11:11:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.121 11:11:06 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.121 11:11:06 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.121 11:11:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.121 11:11:06 -- common/autotest_common.sh@10 -- # set +x 00:25:10.121 11:11:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.121 11:11:06 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:10.121 11:11:06 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:10.121 11:11:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.121 11:11:06 -- host/auth.sh@44 -- # digest=sha384 00:25:10.121 11:11:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.121 11:11:06 -- host/auth.sh@44 -- # keyid=4 00:25:10.121 11:11:06 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:10.121 11:11:06 -- host/auth.sh@46 -- # ckey= 00:25:10.121 11:11:06 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.121 11:11:06 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.121 11:11:06 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:10.121 11:11:06 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.121 11:11:06 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:25:10.121 11:11:06 -- host/auth.sh@68 -- # 
local digest dhgroup keyid ckey 00:25:10.121 11:11:06 -- host/auth.sh@70 -- # digest=sha384 00:25:10.121 11:11:06 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:10.121 11:11:06 -- host/auth.sh@70 -- # keyid=4 00:25:10.121 11:11:06 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.121 11:11:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:10.121 11:11:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.121 11:11:06 -- common/autotest_common.sh@10 -- # set +x 00:25:10.121 11:11:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.121 11:11:06 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:10.121 11:11:06 -- nvmf/common.sh@717 -- # local ip 00:25:10.121 11:11:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.121 11:11:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.121 11:11:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.121 11:11:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.121 11:11:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.121 11:11:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.121 11:11:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.121 11:11:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.121 11:11:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.121 11:11:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.121 11:11:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.121 11:11:06 -- common/autotest_common.sh@10 -- # set +x 00:25:10.691 nvme0n1 00:25:10.691 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.691 11:11:07 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.691 11:11:07 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:10.691 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.691 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:10.691 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.952 11:11:07 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.952 11:11:07 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.952 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.952 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.952 11:11:07 -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:25:10.952 11:11:07 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.952 11:11:07 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:10.952 11:11:07 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:10.952 11:11:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.952 11:11:07 -- host/auth.sh@44 -- # digest=sha512 00:25:10.952 11:11:07 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.952 11:11:07 -- host/auth.sh@44 -- # keyid=0 00:25:10.952 11:11:07 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:10.952 11:11:07 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:10.952 11:11:07 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.952 11:11:07 -- host/auth.sh@49 -- # 
echo ffdhe2048 00:25:10.952 11:11:07 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:10.952 11:11:07 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:10.952 11:11:07 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:10.952 11:11:07 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:25:10.952 11:11:07 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:10.952 11:11:07 -- host/auth.sh@70 -- # digest=sha512 00:25:10.952 11:11:07 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:10.952 11:11:07 -- host/auth.sh@70 -- # keyid=0 00:25:10.952 11:11:07 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.952 11:11:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:10.952 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.952 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.952 11:11:07 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:10.952 11:11:07 -- nvmf/common.sh@717 -- # local ip 00:25:10.952 11:11:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.952 11:11:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.952 11:11:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.952 11:11:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.952 11:11:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.952 11:11:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.952 11:11:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.952 11:11:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.952 11:11:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.952 11:11:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.953 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.953 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:10.953 nvme0n1 00:25:10.953 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.953 11:11:07 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.953 11:11:07 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:10.953 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.953 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:10.953 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.953 11:11:07 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.953 11:11:07 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.953 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.953 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:11.214 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.214 11:11:07 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:11.214 11:11:07 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:11.214 11:11:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.214 11:11:07 -- host/auth.sh@44 -- # digest=sha512 00:25:11.214 11:11:07 -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:25:11.214 11:11:07 -- host/auth.sh@44 -- # keyid=1 00:25:11.214 11:11:07 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:11.214 11:11:07 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:11.214 11:11:07 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.214 11:11:07 -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.214 11:11:07 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:11.215 11:11:07 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:11.215 11:11:07 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:11.215 11:11:07 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:25:11.215 11:11:07 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:11.215 11:11:07 -- host/auth.sh@70 -- # digest=sha512 00:25:11.215 11:11:07 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:11.215 11:11:07 -- host/auth.sh@70 -- # keyid=1 00:25:11.215 11:11:07 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.215 11:11:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.215 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.215 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:11.215 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.215 11:11:07 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:11.215 11:11:07 -- nvmf/common.sh@717 -- # local ip 00:25:11.215 11:11:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.215 11:11:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.215 11:11:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.215 11:11:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.215 11:11:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.215 11:11:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.215 11:11:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.215 11:11:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.215 11:11:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.215 11:11:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.215 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.215 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:11.215 nvme0n1 00:25:11.215 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.215 11:11:07 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.215 11:11:07 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:11.215 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.215 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:11.215 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.215 11:11:07 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.215 11:11:07 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.215 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.215 11:11:07 -- common/autotest_common.sh@10 -- # 
set +x 00:25:11.215 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.215 11:11:07 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:11.215 11:11:07 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:11.215 11:11:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.215 11:11:07 -- host/auth.sh@44 -- # digest=sha512 00:25:11.215 11:11:07 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.215 11:11:07 -- host/auth.sh@44 -- # keyid=2 00:25:11.215 11:11:07 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:11.215 11:11:07 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:11.215 11:11:07 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.215 11:11:07 -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.215 11:11:07 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:11.215 11:11:07 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:11.215 11:11:07 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:11.215 11:11:07 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:25:11.215 11:11:07 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:11.215 11:11:07 -- host/auth.sh@70 -- # digest=sha512 00:25:11.215 11:11:07 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:11.215 11:11:07 -- host/auth.sh@70 -- # keyid=2 00:25:11.215 11:11:07 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.215 11:11:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.215 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.215 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:11.476 11:11:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.476 11:11:07 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:11.476 11:11:07 -- nvmf/common.sh@717 -- # local ip 00:25:11.476 11:11:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.476 11:11:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.476 11:11:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.476 11:11:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.476 11:11:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.476 11:11:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.476 11:11:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.476 11:11:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.476 11:11:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.476 11:11:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.476 11:11:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.476 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:25:11.476 nvme0n1 00:25:11.476 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.476 11:11:08 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.476 11:11:08 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:11.476 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.476 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:11.476 11:11:08 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:25:11.476 11:11:08 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.476 11:11:08 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.476 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.476 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:11.476 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.476 11:11:08 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:11.476 11:11:08 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:11.476 11:11:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.476 11:11:08 -- host/auth.sh@44 -- # digest=sha512 00:25:11.476 11:11:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.476 11:11:08 -- host/auth.sh@44 -- # keyid=3 00:25:11.476 11:11:08 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:11.476 11:11:08 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:11.476 11:11:08 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.476 11:11:08 -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.476 11:11:08 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:11.476 11:11:08 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:11.476 11:11:08 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:11.476 11:11:08 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 3 00:25:11.476 11:11:08 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:11.476 11:11:08 -- host/auth.sh@70 -- # digest=sha512 00:25:11.477 11:11:08 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:11.477 11:11:08 -- host/auth.sh@70 -- # keyid=3 00:25:11.477 11:11:08 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.477 11:11:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.477 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.477 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:11.477 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.477 11:11:08 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:11.477 11:11:08 -- nvmf/common.sh@717 -- # local ip 00:25:11.477 11:11:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.477 11:11:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.477 11:11:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.477 11:11:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.477 11:11:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.477 11:11:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.477 11:11:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.477 11:11:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.477 11:11:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.477 11:11:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.477 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.477 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:11.739 nvme0n1 00:25:11.739 11:11:08 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:25:11.739 11:11:08 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.739 11:11:08 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:11.739 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.739 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:11.739 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.739 11:11:08 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.739 11:11:08 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.739 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.739 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:11.739 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.739 11:11:08 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:11.739 11:11:08 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:11.739 11:11:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.739 11:11:08 -- host/auth.sh@44 -- # digest=sha512 00:25:11.739 11:11:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.739 11:11:08 -- host/auth.sh@44 -- # keyid=4 00:25:11.739 11:11:08 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:11.739 11:11:08 -- host/auth.sh@46 -- # ckey= 00:25:11.739 11:11:08 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.739 11:11:08 -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.739 11:11:08 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:11.739 11:11:08 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.739 11:11:08 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:25:11.739 11:11:08 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:11.739 11:11:08 -- host/auth.sh@70 -- # digest=sha512 00:25:11.739 11:11:08 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:11.739 11:11:08 -- host/auth.sh@70 -- # keyid=4 00:25:11.739 11:11:08 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.739 11:11:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.739 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.739 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:11.739 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.739 11:11:08 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:11.739 11:11:08 -- nvmf/common.sh@717 -- # local ip 00:25:11.739 11:11:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.739 11:11:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.739 11:11:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.739 11:11:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.739 11:11:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.739 11:11:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.739 11:11:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.739 11:11:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.739 11:11:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.739 11:11:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.739 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 
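[editor's annotation] Each connect_authenticate pass in the trace reduces to the same four RPC calls. A condensed sketch of one pass, taken from the sha512/ffdhe2048, key 0 entries above (rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, and 10.0.0.1:4420 is the target address echoed by get_main_ns_ip in the trace):

    # One pass (sha512 / ffdhe2048 / key 0), condensed from the trace above.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0    # bidirectional DH-HMAC-CHAP
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0               # tear down before the next keyid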
00:25:11.739 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.001 nvme0n1 00:25:12.001 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.001 11:11:08 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.001 11:11:08 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:12.001 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.001 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.001 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.001 11:11:08 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.001 11:11:08 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.001 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.001 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.001 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.001 11:11:08 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.001 11:11:08 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:12.001 11:11:08 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:12.001 11:11:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.001 11:11:08 -- host/auth.sh@44 -- # digest=sha512 00:25:12.001 11:11:08 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.001 11:11:08 -- host/auth.sh@44 -- # keyid=0 00:25:12.001 11:11:08 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:12.001 11:11:08 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:12.001 11:11:08 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.001 11:11:08 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.001 11:11:08 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:12.001 11:11:08 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:12.001 11:11:08 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:12.001 11:11:08 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:25:12.001 11:11:08 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:12.001 11:11:08 -- host/auth.sh@70 -- # digest=sha512 00:25:12.001 11:11:08 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:12.001 11:11:08 -- host/auth.sh@70 -- # keyid=0 00:25:12.001 11:11:08 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.001 11:11:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.001 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.001 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.001 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.001 11:11:08 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:12.001 11:11:08 -- nvmf/common.sh@717 -- # local ip 00:25:12.001 11:11:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:12.001 11:11:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:12.001 11:11:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.001 11:11:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.001 11:11:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:12.002 11:11:08 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:12.002 11:11:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:12.002 11:11:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:12.002 11:11:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:12.002 11:11:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.002 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.002 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.262 nvme0n1 00:25:12.262 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.262 11:11:08 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.262 11:11:08 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:12.262 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.262 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.262 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.262 11:11:08 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.262 11:11:08 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.262 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.262 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.262 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.262 11:11:08 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:12.262 11:11:08 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:12.262 11:11:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.262 11:11:08 -- host/auth.sh@44 -- # digest=sha512 00:25:12.262 11:11:08 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.262 11:11:08 -- host/auth.sh@44 -- # keyid=1 00:25:12.262 11:11:08 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:12.262 11:11:08 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:12.262 11:11:08 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.262 11:11:08 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.262 11:11:08 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:12.262 11:11:08 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:12.262 11:11:08 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:12.262 11:11:08 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:25:12.262 11:11:08 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:12.262 11:11:08 -- host/auth.sh@70 -- # digest=sha512 00:25:12.262 11:11:08 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:12.262 11:11:08 -- host/auth.sh@70 -- # keyid=1 00:25:12.262 11:11:08 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.262 11:11:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.262 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.262 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.262 11:11:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.262 11:11:08 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:12.262 11:11:08 -- nvmf/common.sh@717 -- # local ip 
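A minimal sketch of the round-trip that the trace repeats once per keyid, assembled only from commands visible in this log (rpc_cmd, nvmet_auth_set_key, the address, port and NQNs are copied from the trace; the sha512 / ffdhe3072 / keyid 0 pass just above supplies the concrete values; the condensed form and inline check are illustrative, not an extra test that was run):

    # one DH-CHAP authentication round-trip as seen in the trace (sha512 / ffdhe3072 / keyid 0)
    digest=sha512 dhgroup=ffdhe3072 keyid=0
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"            # program the target-side key; the echo destinations are not visible in this trace
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"   # authenticated connect
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # controller came up
    rpc_cmd bdev_nvme_detach_controller nvme0                    # tear down before the next keyid
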
00:25:12.262 11:11:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:12.263 11:11:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:12.263 11:11:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.263 11:11:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.263 11:11:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:12.263 11:11:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.263 11:11:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:12.263 11:11:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:12.263 11:11:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:12.263 11:11:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.263 11:11:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.263 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.523 nvme0n1 00:25:12.523 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.523 11:11:09 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.523 11:11:09 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:12.523 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.523 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.523 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.523 11:11:09 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.523 11:11:09 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.523 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.523 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.523 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.523 11:11:09 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:12.523 11:11:09 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:12.523 11:11:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.523 11:11:09 -- host/auth.sh@44 -- # digest=sha512 00:25:12.523 11:11:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.523 11:11:09 -- host/auth.sh@44 -- # keyid=2 00:25:12.523 11:11:09 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:12.523 11:11:09 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:12.523 11:11:09 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.523 11:11:09 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.523 11:11:09 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:12.523 11:11:09 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:12.523 11:11:09 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:12.523 11:11:09 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:25:12.523 11:11:09 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:12.523 11:11:09 -- host/auth.sh@70 -- # digest=sha512 00:25:12.523 11:11:09 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:12.523 11:11:09 -- host/auth.sh@70 -- # keyid=2 00:25:12.523 11:11:09 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.523 11:11:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.523 11:11:09 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.523 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.523 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.523 11:11:09 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:12.523 11:11:09 -- nvmf/common.sh@717 -- # local ip 00:25:12.523 11:11:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:12.523 11:11:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:12.523 11:11:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.523 11:11:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.524 11:11:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:12.524 11:11:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.524 11:11:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:12.524 11:11:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:12.524 11:11:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:12.524 11:11:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.524 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.524 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.786 nvme0n1 00:25:12.786 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.786 11:11:09 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.786 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.786 11:11:09 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:12.786 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.786 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.786 11:11:09 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.786 11:11:09 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.786 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.786 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.786 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.786 11:11:09 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:12.786 11:11:09 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:12.786 11:11:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.786 11:11:09 -- host/auth.sh@44 -- # digest=sha512 00:25:12.786 11:11:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.786 11:11:09 -- host/auth.sh@44 -- # keyid=3 00:25:12.786 11:11:09 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:12.786 11:11:09 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:12.786 11:11:09 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.786 11:11:09 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.786 11:11:09 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:12.786 11:11:09 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:12.786 11:11:09 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:12.786 11:11:09 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:25:12.786 11:11:09 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:12.786 11:11:09 -- host/auth.sh@70 -- # digest=sha512 00:25:12.786 
11:11:09 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:12.786 11:11:09 -- host/auth.sh@70 -- # keyid=3 00:25:12.786 11:11:09 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.786 11:11:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.786 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.786 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.786 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.786 11:11:09 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:12.786 11:11:09 -- nvmf/common.sh@717 -- # local ip 00:25:12.787 11:11:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:12.787 11:11:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:12.787 11:11:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.787 11:11:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.787 11:11:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:12.787 11:11:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.787 11:11:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:12.787 11:11:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:12.787 11:11:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:12.787 11:11:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.787 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.787 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.047 nvme0n1 00:25:13.047 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.047 11:11:09 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.047 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.047 11:11:09 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:13.047 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.048 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.048 11:11:09 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.048 11:11:09 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.048 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.048 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.048 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.048 11:11:09 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:13.048 11:11:09 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:13.048 11:11:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.048 11:11:09 -- host/auth.sh@44 -- # digest=sha512 00:25:13.048 11:11:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.048 11:11:09 -- host/auth.sh@44 -- # keyid=4 00:25:13.048 11:11:09 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:13.048 11:11:09 -- host/auth.sh@46 -- # ckey= 00:25:13.048 11:11:09 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.048 11:11:09 -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.048 11:11:09 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:13.048 11:11:09 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.048 11:11:09 -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe3072 4 00:25:13.048 11:11:09 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:13.048 11:11:09 -- host/auth.sh@70 -- # digest=sha512 00:25:13.048 11:11:09 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:13.048 11:11:09 -- host/auth.sh@70 -- # keyid=4 00:25:13.048 11:11:09 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.048 11:11:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:13.048 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.048 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.048 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.048 11:11:09 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:13.048 11:11:09 -- nvmf/common.sh@717 -- # local ip 00:25:13.048 11:11:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:13.048 11:11:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:13.048 11:11:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.048 11:11:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.048 11:11:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:13.048 11:11:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.048 11:11:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:13.048 11:11:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:13.048 11:11:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:13.048 11:11:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.048 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.048 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.310 nvme0n1 00:25:13.310 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.310 11:11:09 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.310 11:11:09 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:13.310 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.310 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.311 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.311 11:11:09 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.311 11:11:09 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.311 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.311 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.311 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.311 11:11:09 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.311 11:11:09 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:13.311 11:11:09 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:13.311 11:11:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.311 11:11:09 -- host/auth.sh@44 -- # digest=sha512 00:25:13.311 11:11:09 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.311 11:11:09 -- host/auth.sh@44 -- # keyid=0 00:25:13.311 11:11:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:13.311 11:11:09 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:13.311 11:11:09 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.311 11:11:09 -- host/auth.sh@49 -- # echo ffdhe4096 
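One detail worth noting in the keyid=4 passes above: ckeys[4] is empty, so the controller-key argument is built conditionally at host/auth.sh@71 and the keyid=4 attach is issued with the host key only (no --dhchap-ctrlr-key). A sketch of that idiom; the @71 assignment is verbatim from the trace, while the "${ckey[@]}" expansion in the attach is an assumption about how that behavior is wired in:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty array when ckeys[keyid] is unset/empty
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"                  # keyid=4 -> host key only
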
00:25:13.311 11:11:09 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:13.311 11:11:09 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:13.311 11:11:09 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:13.311 11:11:09 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:25:13.311 11:11:09 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:13.311 11:11:09 -- host/auth.sh@70 -- # digest=sha512 00:25:13.311 11:11:09 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:13.311 11:11:09 -- host/auth.sh@70 -- # keyid=0 00:25:13.311 11:11:09 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.311 11:11:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.311 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.311 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.311 11:11:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.311 11:11:09 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:13.311 11:11:09 -- nvmf/common.sh@717 -- # local ip 00:25:13.311 11:11:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:13.311 11:11:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:13.311 11:11:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.311 11:11:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.311 11:11:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:13.311 11:11:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.311 11:11:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:13.311 11:11:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:13.311 11:11:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:13.311 11:11:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.311 11:11:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.311 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.573 nvme0n1 00:25:13.574 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.574 11:11:10 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.574 11:11:10 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:13.574 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.574 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:13.574 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.836 11:11:10 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.837 11:11:10 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.837 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.837 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:13.837 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.837 11:11:10 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:13.837 11:11:10 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:13.837 11:11:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.837 11:11:10 -- host/auth.sh@44 -- # digest=sha512 00:25:13.837 11:11:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:25:13.837 11:11:10 -- host/auth.sh@44 -- # keyid=1 00:25:13.837 11:11:10 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:13.837 11:11:10 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:13.837 11:11:10 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.837 11:11:10 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.837 11:11:10 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:13.837 11:11:10 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:13.837 11:11:10 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:13.837 11:11:10 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:25:13.837 11:11:10 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:13.837 11:11:10 -- host/auth.sh@70 -- # digest=sha512 00:25:13.837 11:11:10 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:13.837 11:11:10 -- host/auth.sh@70 -- # keyid=1 00:25:13.837 11:11:10 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.837 11:11:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.837 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.837 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:13.837 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.837 11:11:10 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:13.837 11:11:10 -- nvmf/common.sh@717 -- # local ip 00:25:13.837 11:11:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:13.837 11:11:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:13.837 11:11:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.837 11:11:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.837 11:11:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:13.837 11:11:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.837 11:11:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:13.837 11:11:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:13.837 11:11:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:13.837 11:11:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.837 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.837 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.098 nvme0n1 00:25:14.098 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.098 11:11:10 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.098 11:11:10 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:14.098 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.098 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.098 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.098 11:11:10 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.098 11:11:10 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.098 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.098 11:11:10 -- common/autotest_common.sh@10 -- # set +x 
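The nvmf/common.sh@717-731 lines that recur before every attach are get_main_ns_ip resolving the initiator address for the tcp transport. A reconstruction of what those xtrace lines show, with assumptions called out: the candidate table and the echoed 10.0.0.1 come from the log, while the transport variable name TEST_TRANSPORT and the ${!ip} indirection between @724 and @726 are guesses:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # would be used for an rdma transport
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # selected in this run (transport is tcp)
        ip=${ip_candidates[$TEST_TRANSPORT]}         # assumption: selection keyed on the transport variable
        echo "${!ip}"                                # assumption: indirect expansion; prints 10.0.0.1 here
    }
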
00:25:14.098 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.098 11:11:10 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:14.098 11:11:10 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:14.098 11:11:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.098 11:11:10 -- host/auth.sh@44 -- # digest=sha512 00:25:14.098 11:11:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.098 11:11:10 -- host/auth.sh@44 -- # keyid=2 00:25:14.098 11:11:10 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:14.098 11:11:10 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:14.098 11:11:10 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.098 11:11:10 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.098 11:11:10 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:14.098 11:11:10 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:14.098 11:11:10 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:14.098 11:11:10 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:25:14.098 11:11:10 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:14.098 11:11:10 -- host/auth.sh@70 -- # digest=sha512 00:25:14.098 11:11:10 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:14.098 11:11:10 -- host/auth.sh@70 -- # keyid=2 00:25:14.098 11:11:10 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.098 11:11:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:14.098 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.098 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.098 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.098 11:11:10 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:14.098 11:11:10 -- nvmf/common.sh@717 -- # local ip 00:25:14.098 11:11:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:14.098 11:11:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:14.098 11:11:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.098 11:11:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.098 11:11:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:14.098 11:11:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.098 11:11:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:14.098 11:11:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:14.098 11:11:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:14.098 11:11:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.098 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.098 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.359 nvme0n1 00:25:14.359 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.359 11:11:10 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.359 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.359 11:11:10 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:14.359 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.359 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:25:14.359 11:11:10 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.359 11:11:10 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.359 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.359 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.359 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.359 11:11:10 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:14.359 11:11:10 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:14.359 11:11:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.359 11:11:10 -- host/auth.sh@44 -- # digest=sha512 00:25:14.359 11:11:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.359 11:11:10 -- host/auth.sh@44 -- # keyid=3 00:25:14.359 11:11:10 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:14.359 11:11:10 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:14.359 11:11:10 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.359 11:11:10 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.359 11:11:10 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:14.359 11:11:10 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:14.359 11:11:10 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:14.359 11:11:10 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:25:14.359 11:11:10 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:14.359 11:11:10 -- host/auth.sh@70 -- # digest=sha512 00:25:14.359 11:11:10 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:14.359 11:11:10 -- host/auth.sh@70 -- # keyid=3 00:25:14.359 11:11:10 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.359 11:11:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:14.359 11:11:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.359 11:11:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.359 11:11:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.359 11:11:10 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:14.359 11:11:10 -- nvmf/common.sh@717 -- # local ip 00:25:14.359 11:11:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:14.359 11:11:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:14.359 11:11:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.359 11:11:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.359 11:11:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:14.359 11:11:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.359 11:11:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:14.360 11:11:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:14.360 11:11:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:14.360 11:11:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.360 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.360 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.943 nvme0n1 00:25:14.943 11:11:11 -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:25:14.943 11:11:11 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.943 11:11:11 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:14.943 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.943 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.943 11:11:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.943 11:11:11 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.943 11:11:11 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.943 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.943 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.943 11:11:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.943 11:11:11 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:14.943 11:11:11 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:14.943 11:11:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.943 11:11:11 -- host/auth.sh@44 -- # digest=sha512 00:25:14.943 11:11:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.943 11:11:11 -- host/auth.sh@44 -- # keyid=4 00:25:14.943 11:11:11 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:14.943 11:11:11 -- host/auth.sh@46 -- # ckey= 00:25:14.943 11:11:11 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.943 11:11:11 -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.943 11:11:11 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:14.943 11:11:11 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.943 11:11:11 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:25:14.943 11:11:11 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:14.943 11:11:11 -- host/auth.sh@70 -- # digest=sha512 00:25:14.943 11:11:11 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:14.943 11:11:11 -- host/auth.sh@70 -- # keyid=4 00:25:14.943 11:11:11 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.943 11:11:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:14.943 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.943 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.943 11:11:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.943 11:11:11 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:14.943 11:11:11 -- nvmf/common.sh@717 -- # local ip 00:25:14.943 11:11:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:14.943 11:11:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:14.943 11:11:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.943 11:11:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.943 11:11:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:14.943 11:11:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.943 11:11:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:14.943 11:11:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:14.943 11:11:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:14.943 11:11:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.943 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:14.943 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:15.241 nvme0n1 00:25:15.241 11:11:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.241 11:11:11 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.241 11:11:11 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:15.241 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.241 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:15.241 11:11:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.241 11:11:11 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.241 11:11:11 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.241 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.241 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:15.241 11:11:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.241 11:11:11 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.241 11:11:11 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:15.241 11:11:11 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:15.241 11:11:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.241 11:11:11 -- host/auth.sh@44 -- # digest=sha512 00:25:15.241 11:11:11 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.241 11:11:11 -- host/auth.sh@44 -- # keyid=0 00:25:15.241 11:11:11 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:15.241 11:11:11 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:15.241 11:11:11 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.241 11:11:11 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.241 11:11:11 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:15.241 11:11:11 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:15.241 11:11:11 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:15.241 11:11:11 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:25:15.241 11:11:11 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:15.241 11:11:11 -- host/auth.sh@70 -- # digest=sha512 00:25:15.241 11:11:11 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:15.241 11:11:11 -- host/auth.sh@70 -- # keyid=0 00:25:15.241 11:11:11 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.241 11:11:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.241 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.241 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:15.241 11:11:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.241 11:11:11 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:15.241 11:11:11 -- nvmf/common.sh@717 -- # local ip 00:25:15.241 11:11:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:15.241 11:11:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:15.241 11:11:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.241 11:11:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.241 11:11:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:15.241 11:11:11 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:15.241 11:11:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:15.241 11:11:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:15.241 11:11:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:15.241 11:11:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.241 11:11:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.241 11:11:11 -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 nvme0n1 00:25:15.837 11:11:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.837 11:11:12 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.837 11:11:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.837 11:11:12 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:15.837 11:11:12 -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 11:11:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.837 11:11:12 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.837 11:11:12 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.837 11:11:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.837 11:11:12 -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 11:11:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.837 11:11:12 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:15.837 11:11:12 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:15.837 11:11:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.837 11:11:12 -- host/auth.sh@44 -- # digest=sha512 00:25:15.837 11:11:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.837 11:11:12 -- host/auth.sh@44 -- # keyid=1 00:25:15.837 11:11:12 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:15.837 11:11:12 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:15.837 11:11:12 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.837 11:11:12 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.837 11:11:12 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:15.837 11:11:12 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:15.837 11:11:12 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:15.837 11:11:12 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:25:15.837 11:11:12 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:15.837 11:11:12 -- host/auth.sh@70 -- # digest=sha512 00:25:15.837 11:11:12 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:15.837 11:11:12 -- host/auth.sh@70 -- # keyid=1 00:25:15.837 11:11:12 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.837 11:11:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.837 11:11:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.837 11:11:12 -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 11:11:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.837 11:11:12 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:15.837 11:11:12 -- nvmf/common.sh@717 -- # local ip 
00:25:15.837 11:11:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:15.837 11:11:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:15.837 11:11:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.837 11:11:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.837 11:11:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:15.837 11:11:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.837 11:11:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:15.837 11:11:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:15.837 11:11:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:15.837 11:11:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.837 11:11:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.837 11:11:12 -- common/autotest_common.sh@10 -- # set +x 00:25:16.120 nvme0n1 00:25:16.120 11:11:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.120 11:11:12 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.120 11:11:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.120 11:11:12 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:16.120 11:11:12 -- common/autotest_common.sh@10 -- # set +x 00:25:16.120 11:11:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.401 11:11:12 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.401 11:11:12 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.401 11:11:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.401 11:11:12 -- common/autotest_common.sh@10 -- # set +x 00:25:16.401 11:11:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.401 11:11:12 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:16.401 11:11:12 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:16.401 11:11:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.401 11:11:12 -- host/auth.sh@44 -- # digest=sha512 00:25:16.401 11:11:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.401 11:11:12 -- host/auth.sh@44 -- # keyid=2 00:25:16.401 11:11:12 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:16.401 11:11:12 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:16.401 11:11:12 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.401 11:11:12 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.401 11:11:12 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:16.401 11:11:12 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:16.401 11:11:12 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:16.401 11:11:12 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:25:16.401 11:11:12 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:16.401 11:11:12 -- host/auth.sh@70 -- # digest=sha512 00:25:16.401 11:11:12 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:16.401 11:11:12 -- host/auth.sh@70 -- # keyid=2 00:25:16.401 11:11:12 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.401 11:11:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:16.401 11:11:12 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.401 11:11:12 -- common/autotest_common.sh@10 -- # set +x 00:25:16.401 11:11:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.401 11:11:12 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:16.401 11:11:12 -- nvmf/common.sh@717 -- # local ip 00:25:16.401 11:11:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:16.401 11:11:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:16.401 11:11:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.401 11:11:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.401 11:11:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:16.401 11:11:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.401 11:11:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:16.401 11:11:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:16.401 11:11:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:16.402 11:11:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.402 11:11:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.402 11:11:12 -- common/autotest_common.sh@10 -- # set +x 00:25:16.673 nvme0n1 00:25:16.673 11:11:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.673 11:11:13 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.673 11:11:13 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:16.673 11:11:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.673 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:16.673 11:11:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.943 11:11:13 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.943 11:11:13 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.943 11:11:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.943 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:16.943 11:11:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.943 11:11:13 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:16.943 11:11:13 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:16.943 11:11:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.943 11:11:13 -- host/auth.sh@44 -- # digest=sha512 00:25:16.943 11:11:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.943 11:11:13 -- host/auth.sh@44 -- # keyid=3 00:25:16.943 11:11:13 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:16.943 11:11:13 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:16.943 11:11:13 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.943 11:11:13 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.943 11:11:13 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:16.943 11:11:13 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:16.943 11:11:13 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:16.943 11:11:13 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:25:16.943 11:11:13 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:16.943 11:11:13 -- host/auth.sh@70 -- # digest=sha512 00:25:16.943 
11:11:13 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:16.943 11:11:13 -- host/auth.sh@70 -- # keyid=3 00:25:16.943 11:11:13 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.943 11:11:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:16.943 11:11:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.943 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:16.943 11:11:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.943 11:11:13 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:16.943 11:11:13 -- nvmf/common.sh@717 -- # local ip 00:25:16.943 11:11:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:16.943 11:11:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:16.943 11:11:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.943 11:11:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.943 11:11:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:16.943 11:11:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.943 11:11:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:16.943 11:11:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:16.943 11:11:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:16.943 11:11:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.943 11:11:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.943 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:17.220 nvme0n1 00:25:17.220 11:11:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.220 11:11:13 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.220 11:11:13 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:17.220 11:11:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.220 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:17.220 11:11:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.506 11:11:13 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.506 11:11:13 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.506 11:11:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.506 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:17.506 11:11:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.506 11:11:13 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:17.506 11:11:13 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:17.506 11:11:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.506 11:11:13 -- host/auth.sh@44 -- # digest=sha512 00:25:17.506 11:11:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.506 11:11:13 -- host/auth.sh@44 -- # keyid=4 00:25:17.506 11:11:13 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:17.506 11:11:13 -- host/auth.sh@46 -- # ckey= 00:25:17.506 11:11:13 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.506 11:11:13 -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.506 11:11:13 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:17.506 11:11:13 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.506 11:11:13 -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe6144 4 00:25:17.506 11:11:13 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:17.506 11:11:13 -- host/auth.sh@70 -- # digest=sha512 00:25:17.506 11:11:13 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:17.506 11:11:13 -- host/auth.sh@70 -- # keyid=4 00:25:17.506 11:11:13 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.506 11:11:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:17.506 11:11:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.506 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:17.506 11:11:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.506 11:11:13 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:17.506 11:11:13 -- nvmf/common.sh@717 -- # local ip 00:25:17.506 11:11:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:17.506 11:11:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:17.506 11:11:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.506 11:11:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.506 11:11:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:17.506 11:11:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.506 11:11:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:17.506 11:11:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:17.506 11:11:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:17.506 11:11:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.506 11:11:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.506 11:11:13 -- common/autotest_common.sh@10 -- # set +x 00:25:17.792 nvme0n1 00:25:17.792 11:11:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.792 11:11:14 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.792 11:11:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.792 11:11:14 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:17.792 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:17.792 11:11:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.792 11:11:14 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.792 11:11:14 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.792 11:11:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.792 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:17.792 11:11:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.070 11:11:14 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.070 11:11:14 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:18.070 11:11:14 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:18.070 11:11:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.070 11:11:14 -- host/auth.sh@44 -- # digest=sha512 00:25:18.070 11:11:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.070 11:11:14 -- host/auth.sh@44 -- # keyid=0 00:25:18.070 11:11:14 -- host/auth.sh@45 -- # key=DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:18.070 11:11:14 -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:18.070 11:11:14 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.070 11:11:14 -- host/auth.sh@49 -- # echo ffdhe8192 
00:25:18.070 11:11:14 -- host/auth.sh@50 -- # echo DHHC-1:00:MWIzMDI3Yzg2ZjBhNTdiODM4YTFmM2MxM2U0YmI2YzVtff7L: 00:25:18.070 11:11:14 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: ]] 00:25:18.070 11:11:14 -- host/auth.sh@51 -- # echo DHHC-1:03:M2NiOTY0NWUwYTlmN2YwMTRmZTJjMmNmODE2YjI4OGIyYzhjNmNmYTlkNzYwZGYwNjgwYWVkMGZmYWQ5MTBjN+35aow=: 00:25:18.070 11:11:14 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:25:18.070 11:11:14 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:18.070 11:11:14 -- host/auth.sh@70 -- # digest=sha512 00:25:18.070 11:11:14 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:18.070 11:11:14 -- host/auth.sh@70 -- # keyid=0 00:25:18.070 11:11:14 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.070 11:11:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:18.070 11:11:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.070 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:18.070 11:11:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.070 11:11:14 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:18.070 11:11:14 -- nvmf/common.sh@717 -- # local ip 00:25:18.070 11:11:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:18.070 11:11:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:18.070 11:11:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.070 11:11:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.070 11:11:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:18.070 11:11:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.070 11:11:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:18.070 11:11:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:18.070 11:11:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:18.070 11:11:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.070 11:11:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.070 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:18.674 nvme0n1 00:25:18.674 11:11:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.674 11:11:15 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.674 11:11:15 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:18.674 11:11:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.674 11:11:15 -- common/autotest_common.sh@10 -- # set +x 00:25:18.674 11:11:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.674 11:11:15 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.674 11:11:15 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.674 11:11:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.674 11:11:15 -- common/autotest_common.sh@10 -- # set +x 00:25:18.674 11:11:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.674 11:11:15 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:18.674 11:11:15 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:18.674 11:11:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.674 11:11:15 -- host/auth.sh@44 -- # digest=sha512 00:25:18.674 11:11:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 
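The trace above repeats one pattern per key id: nvmet_auth_set_key programs the digest, DH group and DHHC-1 secrets into the kernel target for nqn.2024-02.io.spdk:host0, and connect_authenticate then drives the SPDK initiator against it. A minimal sketch of the initiator side, using only the RPCs visible in the trace (rpc.py stands for scripts/rpc.py, which rpc_cmd wraps; how the key0/ckey0 keyring entries were registered is handled earlier in auth.sh and is not shown here):

    # limit the initiator to the digest/DH group under test
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # connect with in-band DH-HMAC-CHAP; ckey0 is the controller-side (bidirectional) secret
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # confirm the controller came up, then drop it before the next key id
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expected: nvme0
    rpc.py bdev_nvme_detach_controller nvme0

When ckeys[keyid] is empty, as for key id 4 earlier in the trace, the ${ckeys[keyid]:+...} expansion simply drops the --dhchap-ctrlr-key argument and authentication stays unidirectional.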
00:25:18.674 11:11:15 -- host/auth.sh@44 -- # keyid=1 00:25:18.674 11:11:15 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:18.674 11:11:15 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:18.674 11:11:15 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.674 11:11:15 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.674 11:11:15 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:18.674 11:11:15 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:18.674 11:11:15 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:18.674 11:11:15 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:25:18.674 11:11:15 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:18.674 11:11:15 -- host/auth.sh@70 -- # digest=sha512 00:25:18.674 11:11:15 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:18.674 11:11:15 -- host/auth.sh@70 -- # keyid=1 00:25:18.674 11:11:15 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.674 11:11:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:18.674 11:11:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.674 11:11:15 -- common/autotest_common.sh@10 -- # set +x 00:25:18.674 11:11:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.674 11:11:15 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:18.674 11:11:15 -- nvmf/common.sh@717 -- # local ip 00:25:18.674 11:11:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:18.674 11:11:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:18.674 11:11:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.674 11:11:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.674 11:11:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:18.674 11:11:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.674 11:11:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:18.674 11:11:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:18.674 11:11:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:18.674 11:11:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.674 11:11:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.674 11:11:15 -- common/autotest_common.sh@10 -- # set +x 00:25:19.685 nvme0n1 00:25:19.685 11:11:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.685 11:11:16 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.685 11:11:16 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:19.685 11:11:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.685 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:25:19.685 11:11:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.685 11:11:16 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.685 11:11:16 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.685 11:11:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.685 11:11:16 -- common/autotest_common.sh@10 -- # set +x 
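get_main_ns_ip, called just before every attach above, only decides which address the initiator should dial for the chosen transport. A condensed sketch of what the xtrace shows, with the transport selector assumed to come from the test's transport variable (tcp in this run):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
        [[ -n ${!ip} ]] && echo "${!ip}"       # tcp -> $NVMF_INITIATOR_IP -> 10.0.0.1 here
    }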
00:25:19.685 11:11:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.685 11:11:16 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:19.685 11:11:16 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:19.685 11:11:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.685 11:11:16 -- host/auth.sh@44 -- # digest=sha512 00:25:19.685 11:11:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.685 11:11:16 -- host/auth.sh@44 -- # keyid=2 00:25:19.685 11:11:16 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:19.685 11:11:16 -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:19.685 11:11:16 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.685 11:11:16 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.685 11:11:16 -- host/auth.sh@50 -- # echo DHHC-1:01:MWI0OWZhMjdmMDFhYjZlNGQxMGY0MmFjMTlhZjFkNDhEiRsu: 00:25:19.685 11:11:16 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: ]] 00:25:19.685 11:11:16 -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg5ZjAyNmJkMDQxOWZjOTQ4NTdiMTk3YjIzNmFjNmZKylf+: 00:25:19.685 11:11:16 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:25:19.685 11:11:16 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:19.685 11:11:16 -- host/auth.sh@70 -- # digest=sha512 00:25:19.685 11:11:16 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:19.685 11:11:16 -- host/auth.sh@70 -- # keyid=2 00:25:19.685 11:11:16 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.685 11:11:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:19.685 11:11:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.685 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:25:19.685 11:11:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.685 11:11:16 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:19.685 11:11:16 -- nvmf/common.sh@717 -- # local ip 00:25:19.685 11:11:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.685 11:11:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.685 11:11:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.685 11:11:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.685 11:11:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.685 11:11:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.685 11:11:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.685 11:11:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.685 11:11:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.685 11:11:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.685 11:11:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.685 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:25:20.288 nvme0n1 00:25:20.288 11:11:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.288 11:11:16 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.288 11:11:16 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:20.288 11:11:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.288 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:25:20.288 11:11:16 -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:25:20.288 11:11:16 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.288 11:11:16 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.288 11:11:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.288 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:25:20.565 11:11:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.565 11:11:16 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:20.565 11:11:16 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:20.565 11:11:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.565 11:11:16 -- host/auth.sh@44 -- # digest=sha512 00:25:20.565 11:11:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.565 11:11:16 -- host/auth.sh@44 -- # keyid=3 00:25:20.565 11:11:16 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:20.565 11:11:16 -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:20.565 11:11:16 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.565 11:11:16 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.565 11:11:16 -- host/auth.sh@50 -- # echo DHHC-1:02:ZjA3MzljM2I4MjBmOWU1ZDliMGEyMDNjZWRhN2EyNGJkOTAwZTE0ZTYwMjM3NGM0krRolA==: 00:25:20.565 11:11:16 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: ]] 00:25:20.565 11:11:16 -- host/auth.sh@51 -- # echo DHHC-1:00:MTczYWY3YTc3MGJlMmEzOGY0ODQ5OWZlN2ZhM2I3ZGaWibIz: 00:25:20.565 11:11:16 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:25:20.565 11:11:16 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:20.565 11:11:16 -- host/auth.sh@70 -- # digest=sha512 00:25:20.565 11:11:16 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:20.565 11:11:16 -- host/auth.sh@70 -- # keyid=3 00:25:20.565 11:11:16 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.565 11:11:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:20.565 11:11:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.565 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:25:20.565 11:11:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.565 11:11:16 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:20.565 11:11:16 -- nvmf/common.sh@717 -- # local ip 00:25:20.565 11:11:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.565 11:11:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.565 11:11:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.565 11:11:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.565 11:11:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.565 11:11:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.565 11:11:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.565 11:11:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.565 11:11:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.565 11:11:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.565 11:11:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.565 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:25:21.174 nvme0n1 00:25:21.174 11:11:17 -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:25:21.174 11:11:17 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.174 11:11:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.174 11:11:17 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:21.174 11:11:17 -- common/autotest_common.sh@10 -- # set +x 00:25:21.174 11:11:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.174 11:11:17 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.174 11:11:17 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.174 11:11:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.174 11:11:17 -- common/autotest_common.sh@10 -- # set +x 00:25:21.174 11:11:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.174 11:11:17 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:21.174 11:11:17 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:21.174 11:11:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.174 11:11:17 -- host/auth.sh@44 -- # digest=sha512 00:25:21.174 11:11:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.174 11:11:17 -- host/auth.sh@44 -- # keyid=4 00:25:21.174 11:11:17 -- host/auth.sh@45 -- # key=DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:21.175 11:11:17 -- host/auth.sh@46 -- # ckey= 00:25:21.175 11:11:17 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.175 11:11:17 -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.175 11:11:17 -- host/auth.sh@50 -- # echo DHHC-1:03:MjQyOTEyMjAyYzk5Njk5ODY4NmFiYTFmZGRjYzkzNmFjNGRmNTQ2YTAyMjFiNGNhYjlmMzkzZTFmMDNiNDg4NROfG78=: 00:25:21.175 11:11:17 -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.175 11:11:17 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:25:21.175 11:11:17 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:21.175 11:11:17 -- host/auth.sh@70 -- # digest=sha512 00:25:21.175 11:11:17 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:25:21.175 11:11:17 -- host/auth.sh@70 -- # keyid=4 00:25:21.175 11:11:17 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.175 11:11:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:21.175 11:11:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.175 11:11:17 -- common/autotest_common.sh@10 -- # set +x 00:25:21.175 11:11:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.175 11:11:17 -- host/auth.sh@74 -- # get_main_ns_ip 00:25:21.175 11:11:17 -- nvmf/common.sh@717 -- # local ip 00:25:21.175 11:11:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.175 11:11:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.175 11:11:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.175 11:11:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.175 11:11:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.175 11:11:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.175 11:11:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.175 11:11:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.175 11:11:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.175 11:11:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.175 11:11:17 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:21.175 11:11:17 -- common/autotest_common.sh@10 -- # set +x 00:25:22.191 nvme0n1 00:25:22.191 11:11:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.191 11:11:18 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.191 11:11:18 -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:22.191 11:11:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.191 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.191 11:11:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.191 11:11:18 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.191 11:11:18 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.191 11:11:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.191 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.191 11:11:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.191 11:11:18 -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:22.191 11:11:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.191 11:11:18 -- host/auth.sh@44 -- # digest=sha256 00:25:22.191 11:11:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.191 11:11:18 -- host/auth.sh@44 -- # keyid=1 00:25:22.191 11:11:18 -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:22.191 11:11:18 -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:22.191 11:11:18 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.191 11:11:18 -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.191 11:11:18 -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkZDg1ODQ5ZjAxYmU2MzdjNmM0ZmViNWY2M2Y4Njk3ZjJkNjQ1MDBhNWM2ZTMxryID8w==: 00:25:22.191 11:11:18 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: ]] 00:25:22.191 11:11:18 -- host/auth.sh@51 -- # echo DHHC-1:02:MjQyM2M0MGM4NWNiYWM4YjZlNDhhOTQyZDgzOGQyZGY4M2UzOWI5NmY4OWM2ODJm+hGTgA==: 00:25:22.191 11:11:18 -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.191 11:11:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.191 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.191 11:11:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.191 11:11:18 -- host/auth.sh@125 -- # get_main_ns_ip 00:25:22.191 11:11:18 -- nvmf/common.sh@717 -- # local ip 00:25:22.191 11:11:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.191 11:11:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.191 11:11:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.191 11:11:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.191 11:11:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.191 11:11:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.191 11:11:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.191 11:11:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.191 11:11:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.191 11:11:18 -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:22.191 11:11:18 -- common/autotest_common.sh@648 -- # local es=0 00:25:22.191 11:11:18 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:22.191 11:11:18 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:22.191 11:11:18 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:22.191 11:11:18 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:22.191 11:11:18 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:22.191 11:11:18 -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:22.191 11:11:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.191 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.191 request: 00:25:22.191 { 00:25:22.192 "name": "nvme0", 00:25:22.192 "trtype": "tcp", 00:25:22.192 "traddr": "10.0.0.1", 00:25:22.192 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:22.192 "adrfam": "ipv4", 00:25:22.192 "trsvcid": "4420", 00:25:22.192 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:22.192 "method": "bdev_nvme_attach_controller", 00:25:22.192 "req_id": 1 00:25:22.192 } 00:25:22.192 Got JSON-RPC error response 00:25:22.192 response: 00:25:22.192 { 00:25:22.192 "code": -32602, 00:25:22.192 "message": "Invalid parameters" 00:25:22.192 } 00:25:22.192 11:11:18 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:22.192 11:11:18 -- common/autotest_common.sh@651 -- # es=1 00:25:22.192 11:11:18 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:22.192 11:11:18 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:22.192 11:11:18 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:22.192 11:11:18 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.192 11:11:18 -- host/auth.sh@127 -- # jq length 00:25:22.192 11:11:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.192 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.192 11:11:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.192 11:11:18 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:25:22.192 11:11:18 -- host/auth.sh@130 -- # get_main_ns_ip 00:25:22.192 11:11:18 -- nvmf/common.sh@717 -- # local ip 00:25:22.192 11:11:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.192 11:11:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.192 11:11:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.192 11:11:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.192 11:11:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.192 11:11:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.192 11:11:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.192 11:11:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.192 11:11:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.192 11:11:18 -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:22.192 11:11:18 -- common/autotest_common.sh@648 -- # local es=0 00:25:22.192 11:11:18 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:22.192 11:11:18 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:22.192 11:11:18 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:22.192 
11:11:18 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:22.192 11:11:18 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:22.192 11:11:18 -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:22.192 11:11:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.192 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.192 request: 00:25:22.192 { 00:25:22.192 "name": "nvme0", 00:25:22.192 "trtype": "tcp", 00:25:22.192 "traddr": "10.0.0.1", 00:25:22.192 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:22.192 "adrfam": "ipv4", 00:25:22.192 "trsvcid": "4420", 00:25:22.192 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:22.192 "dhchap_key": "key2", 00:25:22.192 "method": "bdev_nvme_attach_controller", 00:25:22.192 "req_id": 1 00:25:22.192 } 00:25:22.192 Got JSON-RPC error response 00:25:22.192 response: 00:25:22.192 { 00:25:22.192 "code": -32602, 00:25:22.192 "message": "Invalid parameters" 00:25:22.192 } 00:25:22.192 11:11:18 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:22.192 11:11:18 -- common/autotest_common.sh@651 -- # es=1 00:25:22.192 11:11:18 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:22.192 11:11:18 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:22.192 11:11:18 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:22.192 11:11:18 -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.192 11:11:18 -- host/auth.sh@133 -- # jq length 00:25:22.192 11:11:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.192 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.460 11:11:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.460 11:11:18 -- host/auth.sh@133 -- # (( 0 == 0 )) 00:25:22.460 11:11:18 -- host/auth.sh@136 -- # get_main_ns_ip 00:25:22.460 11:11:18 -- nvmf/common.sh@717 -- # local ip 00:25:22.460 11:11:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.460 11:11:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.460 11:11:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.460 11:11:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.460 11:11:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.460 11:11:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.460 11:11:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.460 11:11:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.460 11:11:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.460 11:11:18 -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:22.460 11:11:18 -- common/autotest_common.sh@648 -- # local es=0 00:25:22.460 11:11:18 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:22.460 11:11:18 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:22.460 11:11:18 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:22.460 11:11:18 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:22.460 11:11:18 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:25:22.460 11:11:18 -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:22.460 11:11:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.460 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.460 request: 00:25:22.460 { 00:25:22.460 "name": "nvme0", 00:25:22.460 "trtype": "tcp", 00:25:22.460 "traddr": "10.0.0.1", 00:25:22.460 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:22.460 "adrfam": "ipv4", 00:25:22.460 "trsvcid": "4420", 00:25:22.460 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:22.460 "dhchap_key": "key1", 00:25:22.460 "dhchap_ctrlr_key": "ckey2", 00:25:22.460 "method": "bdev_nvme_attach_controller", 00:25:22.460 "req_id": 1 00:25:22.460 } 00:25:22.460 Got JSON-RPC error response 00:25:22.460 response: 00:25:22.460 { 00:25:22.460 "code": -32602, 00:25:22.460 "message": "Invalid parameters" 00:25:22.460 } 00:25:22.460 11:11:18 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:22.460 11:11:18 -- common/autotest_common.sh@651 -- # es=1 00:25:22.460 11:11:18 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:22.460 11:11:18 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:22.460 11:11:18 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:22.460 11:11:18 -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:25:22.460 11:11:18 -- host/auth.sh@141 -- # cleanup 00:25:22.460 11:11:18 -- host/auth.sh@24 -- # nvmftestfini 00:25:22.460 11:11:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:22.460 11:11:18 -- nvmf/common.sh@117 -- # sync 00:25:22.460 11:11:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.460 11:11:18 -- nvmf/common.sh@120 -- # set +e 00:25:22.460 11:11:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.461 11:11:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.461 rmmod nvme_tcp 00:25:22.461 rmmod nvme_fabrics 00:25:22.461 11:11:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.461 11:11:18 -- nvmf/common.sh@124 -- # set -e 00:25:22.461 11:11:18 -- nvmf/common.sh@125 -- # return 0 00:25:22.461 11:11:18 -- nvmf/common.sh@478 -- # '[' -n 472127 ']' 00:25:22.461 11:11:18 -- nvmf/common.sh@479 -- # killprocess 472127 00:25:22.461 11:11:18 -- common/autotest_common.sh@946 -- # '[' -z 472127 ']' 00:25:22.461 11:11:18 -- common/autotest_common.sh@950 -- # kill -0 472127 00:25:22.461 11:11:18 -- common/autotest_common.sh@951 -- # uname 00:25:22.461 11:11:18 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:22.461 11:11:18 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 472127 00:25:22.461 11:11:19 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:22.461 11:11:19 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:22.461 11:11:19 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 472127' 00:25:22.461 killing process with pid 472127 00:25:22.461 11:11:19 -- common/autotest_common.sh@965 -- # kill 472127 00:25:22.461 11:11:19 -- common/autotest_common.sh@970 -- # wait 472127 00:25:22.758 11:11:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:22.758 11:11:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:22.758 11:11:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:22.758 11:11:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.758 11:11:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 
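The three attach attempts above are meant to fail: no DH-CHAP key at all, a key the target was not configured for, and a host key paired with the wrong controller key. Each one comes back as the JSON-RPC -32602 "Invalid parameters" response shown, and the NOT/valid_exec_arg wrapper turns that failure into a passing assertion. A reduced sketch of the pattern, reusing the helper names from autotest_common.sh:

    # NOT <cmd> succeeds only when <cmd> fails (its exit status is inverted)
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

    # the test then double-checks that the rejected connect left no controller behind
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))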
00:25:22.758 11:11:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.758 11:11:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.758 11:11:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.766 11:11:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:24.766 11:11:21 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:24.766 11:11:21 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:24.766 11:11:21 -- host/auth.sh@27 -- # clean_kernel_target 00:25:24.766 11:11:21 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:24.766 11:11:21 -- nvmf/common.sh@675 -- # echo 0 00:25:24.766 11:11:21 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:24.766 11:11:21 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:24.766 11:11:21 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:24.766 11:11:21 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:24.766 11:11:21 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:24.766 11:11:21 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:24.766 11:11:21 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:28.073 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:28.073 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:28.073 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:28.073 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:28.334 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:25:28.906 11:11:25 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.sY8 /tmp/spdk.key-null.QAI /tmp/spdk.key-sha256.k0W /tmp/spdk.key-sha384.akI /tmp/spdk.key-sha512.fhi /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:28.906 11:11:25 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:32.208 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:25:32.208 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:25:32.208 00:25:32.208 real 1m3.937s 00:25:32.208 user 0m57.809s 00:25:32.208 sys 0m14.829s 00:25:32.208 11:11:28 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.208 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:25:32.208 ************************************ 00:25:32.208 END TEST nvmf_auth 00:25:32.208 ************************************ 00:25:32.208 11:11:28 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:25:32.208 11:11:28 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:32.208 11:11:28 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:32.208 11:11:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:32.208 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:25:32.208 ************************************ 00:25:32.208 START TEST nvmf_digest 00:25:32.208 ************************************ 00:25:32.208 11:11:28 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:32.208 * Looking for test storage... 00:25:32.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.208 11:11:28 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.208 11:11:28 -- nvmf/common.sh@7 -- # uname -s 00:25:32.208 11:11:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.208 11:11:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.208 11:11:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.208 11:11:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.208 11:11:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.208 11:11:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.208 11:11:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.208 11:11:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.208 11:11:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.209 11:11:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.209 11:11:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.209 11:11:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.209 11:11:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.209 11:11:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.209 11:11:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.209 11:11:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.209 11:11:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.209 11:11:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.209 11:11:28 
-- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.209 11:11:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.209 11:11:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.209 11:11:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.209 11:11:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.209 11:11:28 -- paths/export.sh@5 -- # export PATH 00:25:32.209 11:11:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.209 11:11:28 -- nvmf/common.sh@47 -- # : 0 00:25:32.209 11:11:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.209 11:11:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.209 11:11:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.209 11:11:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.209 11:11:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.209 11:11:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.209 11:11:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.209 11:11:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.209 11:11:28 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:32.209 11:11:28 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:32.209 11:11:28 -- host/digest.sh@16 -- # runtime=2 00:25:32.209 11:11:28 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:32.209 11:11:28 -- host/digest.sh@138 -- # nvmftestinit 00:25:32.209 11:11:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
00:25:32.209 11:11:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.209 11:11:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:32.209 11:11:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:32.209 11:11:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:32.209 11:11:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.209 11:11:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.209 11:11:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.209 11:11:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:32.209 11:11:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:32.209 11:11:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.209 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:25:40.353 11:11:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.353 11:11:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.353 11:11:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.353 11:11:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.353 11:11:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.353 11:11:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.353 11:11:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.353 11:11:35 -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.353 11:11:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.353 11:11:35 -- nvmf/common.sh@296 -- # e810=() 00:25:40.353 11:11:35 -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.353 11:11:35 -- nvmf/common.sh@297 -- # x722=() 00:25:40.353 11:11:35 -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.353 11:11:35 -- nvmf/common.sh@298 -- # mlx=() 00:25:40.353 11:11:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.353 11:11:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.353 11:11:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.353 11:11:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.354 11:11:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.354 11:11:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.354 11:11:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.354 11:11:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.354 11:11:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:40.354 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:40.354 11:11:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.354 11:11:35 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.354 11:11:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:40.354 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:40.354 11:11:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.354 11:11:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.354 11:11:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.354 11:11:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.354 11:11:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.354 11:11:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:40.354 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:40.354 11:11:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.354 11:11:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.354 11:11:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.354 11:11:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.354 11:11:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.354 11:11:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:40.354 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:40.354 11:11:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.354 11:11:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:40.354 11:11:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:40.354 11:11:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:40.354 11:11:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.354 11:11:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.354 11:11:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.354 11:11:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.354 11:11:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.354 11:11:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.354 11:11:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.354 11:11:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.354 11:11:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.354 11:11:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.354 11:11:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.354 11:11:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.354 11:11:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
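Before any address setup, gather_supported_nvmf_pci_devs matched the two E810 ports by PCI device id 0x159b and resolved their interface names through sysfs, which is where cvl_0_0 and cvl_0_1 come from. The mapping logged above boils down to:

    for pci in "${pci_devs[@]}"; do                        # e.g. 0000:4b:00.0 and 0000:4b:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev directories for that function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done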
00:25:40.354 11:11:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.354 11:11:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.354 11:11:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.354 11:11:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.354 11:11:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.354 11:11:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.354 11:11:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:25:40.354 00:25:40.354 --- 10.0.0.2 ping statistics --- 00:25:40.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.354 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:25:40.354 11:11:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:25:40.354 00:25:40.354 --- 10.0.0.1 ping statistics --- 00:25:40.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.354 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:40.354 11:11:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.354 11:11:35 -- nvmf/common.sh@411 -- # return 0 00:25:40.354 11:11:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:40.354 11:11:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.354 11:11:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:40.354 11:11:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.354 11:11:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:40.354 11:11:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:40.354 11:11:35 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:40.354 11:11:35 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:40.354 11:11:35 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:40.354 11:11:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:40.354 11:11:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:40.354 11:11:35 -- common/autotest_common.sh@10 -- # set +x 00:25:40.354 ************************************ 00:25:40.354 START TEST nvmf_digest_clean 00:25:40.354 ************************************ 00:25:40.354 11:11:36 -- common/autotest_common.sh@1121 -- # run_digest 00:25:40.354 11:11:36 -- host/digest.sh@120 -- # local dsa_initiator 00:25:40.354 11:11:36 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:40.354 11:11:36 -- host/digest.sh@121 -- # dsa_initiator=false 00:25:40.354 11:11:36 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:40.354 11:11:36 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:40.354 11:11:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:40.354 11:11:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:40.354 11:11:36 -- common/autotest_common.sh@10 -- # set +x 00:25:40.354 11:11:36 -- nvmf/common.sh@470 -- # nvmfpid=490057 00:25:40.354 11:11:36 -- nvmf/common.sh@471 -- # waitforlisten 490057 00:25:40.354 11:11:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:40.354 11:11:36 -- common/autotest_common.sh@827 -- # '[' -z 490057 ']' 00:25:40.354 11:11:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.354 11:11:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:40.354 11:11:36 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.354 11:11:36 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:40.354 11:11:36 -- common/autotest_common.sh@10 -- # set +x 00:25:40.354 [2024-05-15 11:11:36.103022] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:25:40.354 [2024-05-15 11:11:36.103080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.354 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.354 [2024-05-15 11:11:36.172640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.354 [2024-05-15 11:11:36.245955] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.354 [2024-05-15 11:11:36.245994] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.354 [2024-05-15 11:11:36.246001] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.354 [2024-05-15 11:11:36.246008] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.354 [2024-05-15 11:11:36.246014] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
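The ping checks a few lines up only pass because nvmf_tcp_init splits the two ports (presumably cabled back to back, since NET_TYPE=phy) across network namespaces: cvl_0_0 becomes the target side inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator

This is why the nvmf_tgt started right above has to run under NVMF_TARGET_NS_CMD (ip netns exec), while its RPC socket /var/tmp/spdk.sock, being a Unix socket, stays reachable from the root namespace.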
00:25:40.354 [2024-05-15 11:11:36.246031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.354 11:11:36 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:40.354 11:11:36 -- common/autotest_common.sh@860 -- # return 0 00:25:40.354 11:11:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:40.354 11:11:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.354 11:11:36 -- common/autotest_common.sh@10 -- # set +x 00:25:40.354 11:11:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.354 11:11:36 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:40.354 11:11:36 -- host/digest.sh@126 -- # common_target_config 00:25:40.354 11:11:36 -- host/digest.sh@43 -- # rpc_cmd 00:25:40.354 11:11:36 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.354 11:11:36 -- common/autotest_common.sh@10 -- # set +x 00:25:40.354 null0 00:25:40.354 [2024-05-15 11:11:36.980776] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.354 [2024-05-15 11:11:37.004799] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:40.354 [2024-05-15 11:11:37.004998] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.615 11:11:37 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.615 11:11:37 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:40.615 11:11:37 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:40.615 11:11:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:40.615 11:11:37 -- host/digest.sh@80 -- # rw=randread 00:25:40.615 11:11:37 -- host/digest.sh@80 -- # bs=4096 00:25:40.615 11:11:37 -- host/digest.sh@80 -- # qd=128 00:25:40.615 11:11:37 -- host/digest.sh@80 -- # scan_dsa=false 00:25:40.615 11:11:37 -- host/digest.sh@83 -- # bperfpid=490400 00:25:40.615 11:11:37 -- host/digest.sh@84 -- # waitforlisten 490400 /var/tmp/bperf.sock 00:25:40.615 11:11:37 -- common/autotest_common.sh@827 -- # '[' -z 490400 ']' 00:25:40.615 11:11:37 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:40.615 11:11:37 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:40.615 11:11:37 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:40.615 11:11:37 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:40.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:40.615 11:11:37 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:40.615 11:11:37 -- common/autotest_common.sh@10 -- # set +x 00:25:40.615 [2024-05-15 11:11:37.059217] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
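common_target_config is collapsed to a single rpc_cmd line in the trace, but its visible effects are a null bdev named null0, the TCP transport init and a listener on 10.0.0.2:4420. One plausible equivalent, spelled out as individual RPCs (the exact calls, sizes and flags digest.sh batches together are not shown in the log, so treat this purely as an illustration; NVMF_TRANSPORT_OPTS='-t tcp -o', the cnode1 NQN and the serial number come from the trace):

    rpc.py framework_start_init                      # target was launched with --wait-for-rpc
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 100 4096           # sizes illustrative
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4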
00:25:40.615 [2024-05-15 11:11:37.059263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490400 ] 00:25:40.615 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.615 [2024-05-15 11:11:37.135100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.615 [2024-05-15 11:11:37.199067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.187 11:11:37 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:41.187 11:11:37 -- common/autotest_common.sh@860 -- # return 0 00:25:41.187 11:11:37 -- host/digest.sh@86 -- # false 00:25:41.187 11:11:37 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:41.187 11:11:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:41.448 11:11:38 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.448 11:11:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.709 nvme0n1 00:25:41.709 11:11:38 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:41.709 11:11:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:41.970 Running I/O for 2 seconds... 00:25:43.903 00:25:43.903 Latency(us) 00:25:43.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.904 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:43.904 nvme0n1 : 2.00 20407.33 79.72 0.00 0.00 6265.43 2648.75 18896.21 00:25:43.904 =================================================================================================================== 00:25:43.904 Total : 20407.33 79.72 0.00 0.00 6265.43 2648.75 18896.21 00:25:43.904 0 00:25:43.904 11:11:40 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:43.904 11:11:40 -- host/digest.sh@93 -- # get_accel_stats 00:25:43.904 11:11:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:43.904 11:11:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:43.904 | select(.opcode=="crc32c") 00:25:43.904 | "\(.module_name) \(.executed)"' 00:25:43.904 11:11:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:44.171 11:11:40 -- host/digest.sh@94 -- # false 00:25:44.171 11:11:40 -- host/digest.sh@94 -- # exp_module=software 00:25:44.171 11:11:40 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:44.171 11:11:40 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:44.171 11:11:40 -- host/digest.sh@98 -- # killprocess 490400 00:25:44.171 11:11:40 -- common/autotest_common.sh@946 -- # '[' -z 490400 ']' 00:25:44.171 11:11:40 -- common/autotest_common.sh@950 -- # kill -0 490400 00:25:44.171 11:11:40 -- common/autotest_common.sh@951 -- # uname 00:25:44.171 11:11:40 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:44.171 11:11:40 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 490400 00:25:44.171 11:11:40 -- common/autotest_common.sh@952 -- # process_name=reactor_1 
00:25:44.171 11:11:40 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:44.171 11:11:40 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 490400' 00:25:44.171 killing process with pid 490400 00:25:44.171 11:11:40 -- common/autotest_common.sh@965 -- # kill 490400 00:25:44.171 Received shutdown signal, test time was about 2.000000 seconds 00:25:44.171 00:25:44.171 Latency(us) 00:25:44.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.171 =================================================================================================================== 00:25:44.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.171 11:11:40 -- common/autotest_common.sh@970 -- # wait 490400 00:25:44.171 11:11:40 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:44.171 11:11:40 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:44.171 11:11:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:44.171 11:11:40 -- host/digest.sh@80 -- # rw=randread 00:25:44.171 11:11:40 -- host/digest.sh@80 -- # bs=131072 00:25:44.171 11:11:40 -- host/digest.sh@80 -- # qd=16 00:25:44.171 11:11:40 -- host/digest.sh@80 -- # scan_dsa=false 00:25:44.171 11:11:40 -- host/digest.sh@83 -- # bperfpid=491090 00:25:44.171 11:11:40 -- host/digest.sh@84 -- # waitforlisten 491090 /var/tmp/bperf.sock 00:25:44.171 11:11:40 -- common/autotest_common.sh@827 -- # '[' -z 491090 ']' 00:25:44.171 11:11:40 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:44.171 11:11:40 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:44.171 11:11:40 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:44.171 11:11:40 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:44.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:44.171 11:11:40 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:44.171 11:11:40 -- common/autotest_common.sh@10 -- # set +x 00:25:44.171 [2024-05-15 11:11:40.805522] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:25:44.171 [2024-05-15 11:11:40.805580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491090 ] 00:25:44.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:44.171 Zero copy mechanism will not be used. 
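Each run_bperf iteration traced above follows the same initiator-side pattern, varying only the workload (-w), I/O size (-o) and queue depth (-q): bdevperf starts idle on its own RPC socket, the framework is released, an NVMe/TCP controller is attached with data digest enabled, and the timed run is driven from bdevperf.py. Condensed from the commands already shown in the trace (paths shortened; values match the randread 131072/16 run that just launched):

  BPERF=/var/tmp/bperf.sock

  ./build/examples/bdevperf -m 2 -r $BPERF -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  ./scripts/rpc.py -s $BPERF framework_start_init
  # --ddgst enables the NVMe/TCP data digest, so every payload carries a CRC32C checksum
  ./scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests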
00:25:44.433 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.433 [2024-05-15 11:11:40.880075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.433 [2024-05-15 11:11:40.932670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.003 11:11:41 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:45.003 11:11:41 -- common/autotest_common.sh@860 -- # return 0 00:25:45.003 11:11:41 -- host/digest.sh@86 -- # false 00:25:45.003 11:11:41 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:45.003 11:11:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:45.263 11:11:41 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.263 11:11:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.523 nvme0n1 00:25:45.524 11:11:42 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:45.524 11:11:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:45.524 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.524 Zero copy mechanism will not be used. 00:25:45.524 Running I/O for 2 seconds... 00:25:48.071 00:25:48.071 Latency(us) 00:25:48.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.071 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:48.071 nvme0n1 : 2.00 3571.75 446.47 0.00 0.00 4476.78 1024.00 9338.88 00:25:48.071 =================================================================================================================== 00:25:48.071 Total : 3571.75 446.47 0.00 0.00 4476.78 1024.00 9338.88 00:25:48.071 0 00:25:48.071 11:11:44 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:48.071 11:11:44 -- host/digest.sh@93 -- # get_accel_stats 00:25:48.071 11:11:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:48.071 11:11:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:48.071 | select(.opcode=="crc32c") 00:25:48.071 | "\(.module_name) \(.executed)"' 00:25:48.071 11:11:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:48.071 11:11:44 -- host/digest.sh@94 -- # false 00:25:48.071 11:11:44 -- host/digest.sh@94 -- # exp_module=software 00:25:48.071 11:11:44 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:48.071 11:11:44 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:48.071 11:11:44 -- host/digest.sh@98 -- # killprocess 491090 00:25:48.071 11:11:44 -- common/autotest_common.sh@946 -- # '[' -z 491090 ']' 00:25:48.071 11:11:44 -- common/autotest_common.sh@950 -- # kill -0 491090 00:25:48.071 11:11:44 -- common/autotest_common.sh@951 -- # uname 00:25:48.071 11:11:44 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:48.071 11:11:44 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 491090 00:25:48.071 11:11:44 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:48.071 11:11:44 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:48.071 11:11:44 -- common/autotest_common.sh@964 -- # echo 
'killing process with pid 491090' 00:25:48.071 killing process with pid 491090 00:25:48.071 11:11:44 -- common/autotest_common.sh@965 -- # kill 491090 00:25:48.071 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.071 00:25:48.071 Latency(us) 00:25:48.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.071 =================================================================================================================== 00:25:48.071 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.071 11:11:44 -- common/autotest_common.sh@970 -- # wait 491090 00:25:48.071 11:11:44 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:48.071 11:11:44 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:48.071 11:11:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:48.071 11:11:44 -- host/digest.sh@80 -- # rw=randwrite 00:25:48.071 11:11:44 -- host/digest.sh@80 -- # bs=4096 00:25:48.071 11:11:44 -- host/digest.sh@80 -- # qd=128 00:25:48.071 11:11:44 -- host/digest.sh@80 -- # scan_dsa=false 00:25:48.071 11:11:44 -- host/digest.sh@83 -- # bperfpid=491769 00:25:48.071 11:11:44 -- host/digest.sh@84 -- # waitforlisten 491769 /var/tmp/bperf.sock 00:25:48.071 11:11:44 -- common/autotest_common.sh@827 -- # '[' -z 491769 ']' 00:25:48.071 11:11:44 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:48.071 11:11:44 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:48.071 11:11:44 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:48.071 11:11:44 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:48.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:48.071 11:11:44 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:48.071 11:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:48.071 [2024-05-15 11:11:44.512296] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:25:48.071 [2024-05-15 11:11:44.512352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491769 ] 00:25:48.071 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.071 [2024-05-15 11:11:44.586185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.071 [2024-05-15 11:11:44.639285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.643 11:11:45 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:48.643 11:11:45 -- common/autotest_common.sh@860 -- # return 0 00:25:48.643 11:11:45 -- host/digest.sh@86 -- # false 00:25:48.643 11:11:45 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:48.643 11:11:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:48.903 11:11:45 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.903 11:11:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.164 nvme0n1 00:25:49.425 11:11:45 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:49.425 11:11:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.425 Running I/O for 2 seconds... 00:25:51.340 00:25:51.340 Latency(us) 00:25:51.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.340 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:51.340 nvme0n1 : 2.00 22540.96 88.05 0.00 0.00 5673.44 2116.27 13544.11 00:25:51.340 =================================================================================================================== 00:25:51.340 Total : 22540.96 88.05 0.00 0.00 5673.44 2116.27 13544.11 00:25:51.340 0 00:25:51.340 11:11:47 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:51.340 11:11:47 -- host/digest.sh@93 -- # get_accel_stats 00:25:51.340 11:11:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:51.340 11:11:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:51.340 | select(.opcode=="crc32c") 00:25:51.340 | "\(.module_name) \(.executed)"' 00:25:51.340 11:11:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:51.601 11:11:48 -- host/digest.sh@94 -- # false 00:25:51.601 11:11:48 -- host/digest.sh@94 -- # exp_module=software 00:25:51.601 11:11:48 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:51.601 11:11:48 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:51.601 11:11:48 -- host/digest.sh@98 -- # killprocess 491769 00:25:51.601 11:11:48 -- common/autotest_common.sh@946 -- # '[' -z 491769 ']' 00:25:51.601 11:11:48 -- common/autotest_common.sh@950 -- # kill -0 491769 00:25:51.601 11:11:48 -- common/autotest_common.sh@951 -- # uname 00:25:51.601 11:11:48 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:51.601 11:11:48 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 491769 00:25:51.601 11:11:48 -- common/autotest_common.sh@952 -- # process_name=reactor_1 
00:25:51.601 11:11:48 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:51.601 11:11:48 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 491769' 00:25:51.601 killing process with pid 491769 00:25:51.601 11:11:48 -- common/autotest_common.sh@965 -- # kill 491769 00:25:51.601 Received shutdown signal, test time was about 2.000000 seconds 00:25:51.601 00:25:51.601 Latency(us) 00:25:51.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.601 =================================================================================================================== 00:25:51.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.601 11:11:48 -- common/autotest_common.sh@970 -- # wait 491769 00:25:51.863 11:11:48 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:51.863 11:11:48 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:51.863 11:11:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:51.863 11:11:48 -- host/digest.sh@80 -- # rw=randwrite 00:25:51.863 11:11:48 -- host/digest.sh@80 -- # bs=131072 00:25:51.863 11:11:48 -- host/digest.sh@80 -- # qd=16 00:25:51.863 11:11:48 -- host/digest.sh@80 -- # scan_dsa=false 00:25:51.863 11:11:48 -- host/digest.sh@83 -- # bperfpid=492462 00:25:51.863 11:11:48 -- host/digest.sh@84 -- # waitforlisten 492462 /var/tmp/bperf.sock 00:25:51.863 11:11:48 -- common/autotest_common.sh@827 -- # '[' -z 492462 ']' 00:25:51.863 11:11:48 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:51.863 11:11:48 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:51.863 11:11:48 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:51.863 11:11:48 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:51.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:51.863 11:11:48 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:51.863 11:11:48 -- common/autotest_common.sh@10 -- # set +x 00:25:51.863 [2024-05-15 11:11:48.319695] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:25:51.863 [2024-05-15 11:11:48.319751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492462 ] 00:25:51.863 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:51.863 Zero copy mechanism will not be used. 
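After each timed run the test reads the accel framework statistics back from the bdevperf socket and verifies that crc32c digest work was actually executed, and by the expected module ("software" in these runs, since scan_dsa=false). A hedged recap of that check, reusing the jq filter traced above:

  stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
  read -r acc_module acc_executed < <(jq -r \
      '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' <<< "$stats")
  # the run only passes if some crc32c operations ran and the module matches the expectation
  (( acc_executed > 0 )) && [[ $acc_module == software ]] \
      && echo "crc32c digests computed in software: $acc_executed operations"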
00:25:51.863 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.863 [2024-05-15 11:11:48.394324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.863 [2024-05-15 11:11:48.447318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.435 11:11:49 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:52.435 11:11:49 -- common/autotest_common.sh@860 -- # return 0 00:25:52.435 11:11:49 -- host/digest.sh@86 -- # false 00:25:52.435 11:11:49 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:52.435 11:11:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:52.695 11:11:49 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.695 11:11:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.956 nvme0n1 00:25:52.956 11:11:49 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:52.956 11:11:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.956 Zero copy mechanism will not be used. 00:25:52.956 Running I/O for 2 seconds... 00:25:55.501 00:25:55.501 Latency(us) 00:25:55.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.501 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:55.501 nvme0n1 : 2.00 4335.24 541.90 0.00 0.00 3684.63 1576.96 7372.80 00:25:55.501 =================================================================================================================== 00:25:55.501 Total : 4335.24 541.90 0.00 0.00 3684.63 1576.96 7372.80 00:25:55.501 0 00:25:55.501 11:11:51 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:55.501 11:11:51 -- host/digest.sh@93 -- # get_accel_stats 00:25:55.501 11:11:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:55.501 11:11:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:55.501 | select(.opcode=="crc32c") 00:25:55.501 | "\(.module_name) \(.executed)"' 00:25:55.501 11:11:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:55.501 11:11:51 -- host/digest.sh@94 -- # false 00:25:55.501 11:11:51 -- host/digest.sh@94 -- # exp_module=software 00:25:55.501 11:11:51 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:55.501 11:11:51 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:55.501 11:11:51 -- host/digest.sh@98 -- # killprocess 492462 00:25:55.501 11:11:51 -- common/autotest_common.sh@946 -- # '[' -z 492462 ']' 00:25:55.501 11:11:51 -- common/autotest_common.sh@950 -- # kill -0 492462 00:25:55.501 11:11:51 -- common/autotest_common.sh@951 -- # uname 00:25:55.501 11:11:51 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:55.501 11:11:51 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 492462 00:25:55.501 11:11:51 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:55.501 11:11:51 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:55.501 11:11:51 -- common/autotest_common.sh@964 -- # echo 
'killing process with pid 492462' 00:25:55.501 killing process with pid 492462 00:25:55.501 11:11:51 -- common/autotest_common.sh@965 -- # kill 492462 00:25:55.501 Received shutdown signal, test time was about 2.000000 seconds 00:25:55.501 00:25:55.501 Latency(us) 00:25:55.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.501 =================================================================================================================== 00:25:55.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.501 11:11:51 -- common/autotest_common.sh@970 -- # wait 492462 00:25:55.501 11:11:51 -- host/digest.sh@132 -- # killprocess 490057 00:25:55.501 11:11:51 -- common/autotest_common.sh@946 -- # '[' -z 490057 ']' 00:25:55.501 11:11:51 -- common/autotest_common.sh@950 -- # kill -0 490057 00:25:55.501 11:11:51 -- common/autotest_common.sh@951 -- # uname 00:25:55.501 11:11:51 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:55.502 11:11:51 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 490057 00:25:55.502 11:11:51 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:55.502 11:11:51 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:55.502 11:11:51 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 490057' 00:25:55.502 killing process with pid 490057 00:25:55.502 11:11:51 -- common/autotest_common.sh@965 -- # kill 490057 00:25:55.502 [2024-05-15 11:11:51.977820] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:55.502 11:11:51 -- common/autotest_common.sh@970 -- # wait 490057 00:25:55.502 00:25:55.502 real 0m16.066s 00:25:55.502 user 0m31.627s 00:25:55.502 sys 0m3.347s 00:25:55.502 11:11:52 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:55.502 11:11:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.502 ************************************ 00:25:55.502 END TEST nvmf_digest_clean 00:25:55.502 ************************************ 00:25:55.502 11:11:52 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:55.502 11:11:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:55.502 11:11:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:55.502 11:11:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.762 ************************************ 00:25:55.762 START TEST nvmf_digest_error 00:25:55.762 ************************************ 00:25:55.762 11:11:52 -- common/autotest_common.sh@1121 -- # run_digest_error 00:25:55.762 11:11:52 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:55.762 11:11:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:55.762 11:11:52 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:55.762 11:11:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.762 11:11:52 -- nvmf/common.sh@470 -- # nvmfpid=493366 00:25:55.762 11:11:52 -- nvmf/common.sh@471 -- # waitforlisten 493366 00:25:55.762 11:11:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:55.762 11:11:52 -- common/autotest_common.sh@827 -- # '[' -z 493366 ']' 00:25:55.762 11:11:52 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.762 11:11:52 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:55.762 11:11:52 -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.762 11:11:52 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:55.762 11:11:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.762 [2024-05-15 11:11:52.250233] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:25:55.762 [2024-05-15 11:11:52.250286] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.762 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.762 [2024-05-15 11:11:52.317748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.762 [2024-05-15 11:11:52.391314] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.762 [2024-05-15 11:11:52.391352] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.762 [2024-05-15 11:11:52.391360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.762 [2024-05-15 11:11:52.391366] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.762 [2024-05-15 11:11:52.391372] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.762 [2024-05-15 11:11:52.391391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.702 11:11:53 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:56.702 11:11:53 -- common/autotest_common.sh@860 -- # return 0 00:25:56.702 11:11:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:56.702 11:11:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.702 11:11:53 -- common/autotest_common.sh@10 -- # set +x 00:25:56.702 11:11:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.702 11:11:53 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:56.702 11:11:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.702 11:11:53 -- common/autotest_common.sh@10 -- # set +x 00:25:56.702 [2024-05-15 11:11:53.057325] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:56.702 11:11:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.702 11:11:53 -- host/digest.sh@105 -- # common_target_config 00:25:56.702 11:11:53 -- host/digest.sh@43 -- # rpc_cmd 00:25:56.702 11:11:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.702 11:11:53 -- common/autotest_common.sh@10 -- # set +x 00:25:56.702 null0 00:25:56.702 [2024-05-15 11:11:53.137821] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.702 [2024-05-15 11:11:53.161819] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:56.702 [2024-05-15 11:11:53.162024] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.702 11:11:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.702 11:11:53 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:56.702 
11:11:53 -- host/digest.sh@54 -- # local rw bs qd 00:25:56.702 11:11:53 -- host/digest.sh@56 -- # rw=randread 00:25:56.702 11:11:53 -- host/digest.sh@56 -- # bs=4096 00:25:56.702 11:11:53 -- host/digest.sh@56 -- # qd=128 00:25:56.702 11:11:53 -- host/digest.sh@58 -- # bperfpid=493515 00:25:56.702 11:11:53 -- host/digest.sh@60 -- # waitforlisten 493515 /var/tmp/bperf.sock 00:25:56.702 11:11:53 -- common/autotest_common.sh@827 -- # '[' -z 493515 ']' 00:25:56.702 11:11:53 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:56.702 11:11:53 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:56.702 11:11:53 -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:56.702 11:11:53 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:56.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:56.702 11:11:53 -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:56.702 11:11:53 -- common/autotest_common.sh@10 -- # set +x 00:25:56.702 [2024-05-15 11:11:53.224564] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:25:56.702 [2024-05-15 11:11:53.224625] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493515 ] 00:25:56.702 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.702 [2024-05-15 11:11:53.297980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.702 [2024-05-15 11:11:53.351228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.643 11:11:53 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:57.643 11:11:53 -- common/autotest_common.sh@860 -- # return 0 00:25:57.643 11:11:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:57.643 11:11:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:57.643 11:11:54 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:57.643 11:11:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.643 11:11:54 -- common/autotest_common.sh@10 -- # set +x 00:25:57.643 11:11:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.643 11:11:54 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.643 11:11:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.903 nvme0n1 00:25:57.903 11:11:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:57.903 11:11:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.903 11:11:54 -- common/autotest_common.sh@10 -- # set +x 00:25:57.903 11:11:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.903 11:11:54 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:57.903 11:11:54 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.903 Running I/O for 2 seconds... 00:25:57.903 [2024-05-15 11:11:54.527073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:57.903 [2024-05-15 11:11:54.527101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-05-15 11:11:54.527110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.903 [2024-05-15 11:11:54.541761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:57.903 [2024-05-15 11:11:54.541780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-05-15 11:11:54.541786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.903 [2024-05-15 11:11:54.555287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:57.903 [2024-05-15 11:11:54.555304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.903 [2024-05-15 11:11:54.555311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.165 [2024-05-15 11:11:54.567013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.165 [2024-05-15 11:11:54.567030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.165 [2024-05-15 11:11:54.567037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.165 [2024-05-15 11:11:54.579004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.165 [2024-05-15 11:11:54.579022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.165 [2024-05-15 11:11:54.579028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.165 [2024-05-15 11:11:54.591653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.165 [2024-05-15 11:11:54.591670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.165 [2024-05-15 11:11:54.591676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.165 [2024-05-15 11:11:54.603390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.165 [2024-05-15 11:11:54.603407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15763 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:58.165 [2024-05-15 11:11:54.603414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.165 [2024-05-15 11:11:54.616125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.165 [2024-05-15 11:11:54.616142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.165 [2024-05-15 11:11:54.616149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.165 [2024-05-15 11:11:54.627097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.165 [2024-05-15 11:11:54.627113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.627120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.640564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.640580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.640587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.652536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.652556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.652563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.664975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.664991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.664998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.675909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.675925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.675935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.688915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.688931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:15280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.688938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.700764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.700780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.700786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.712039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.712055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.712062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.724576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.724592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.724598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.736487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.736504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.736510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.748585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.748602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.748608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.760466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.760483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.760489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.770743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.770760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.770766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.782394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.782414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.782420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.795456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.795472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.795478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.166 [2024-05-15 11:11:54.808182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.166 [2024-05-15 11:11:54.808199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.166 [2024-05-15 11:11:54.808206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.822074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.822090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.822097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.833482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.833498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.833505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.844588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.844606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.844613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.855924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 
[2024-05-15 11:11:54.855940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.855947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.868612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.868629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.868635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.881692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.881708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.881715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.894303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.894320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.894326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.905507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.905524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.905530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.918582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.918598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.918604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.931417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.931434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.931440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.944074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.944091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.944097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.955892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.955908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.955915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.967613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.967630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.967636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.977768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.977784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.977790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:54.991930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:54.991946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:54.991955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:55.002203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:55.002219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:55.002225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:55.014420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:55.014436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:55.014443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:55.027003] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:55.027019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:55.027025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:55.039998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:55.040014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:55.040020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:55.052556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:55.052572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:55.052578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:55.064926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:55.064942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:55.064948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.427 [2024-05-15 11:11:55.076978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.427 [2024-05-15 11:11:55.076995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.427 [2024-05-15 11:11:55.077001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.089045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.089062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.089069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.101198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.101219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.101226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:58.688 [2024-05-15 11:11:55.113000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.113017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.113023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.124728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.124745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.124751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.137461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.137477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.137483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.148558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.148574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.148581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.161157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.161173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.161179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.173198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.173214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.173220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.184990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.185008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.185014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 11:11:55.196507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.688 [2024-05-15 11:11:55.196524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 11:11:55.196530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.208843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.208860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.208866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.219639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.219656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.219662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.231606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.231629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.231636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.244264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.244281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.244287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.257553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.257570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.257576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.269229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.269246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.269252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.281689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.281705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.281711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.292904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.292920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.292926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.303481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.303498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.303507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.316606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.316623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.316629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.328950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.328967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.328973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 11:11:55.340235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.689 [2024-05-15 11:11:55.340251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 11:11:55.340258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.354324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.354340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.950 [2024-05-15 11:11:55.354347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.365474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.365490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.365497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.377394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.377411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.377418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.388419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.388436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.388443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.400430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.400446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.400453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.413078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.413095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.413101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.424850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.424867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.424874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.437402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.437418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23816 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.437424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.447604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.447621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.447627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.460783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.460800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.460807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.473029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.473046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.473053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.483199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.483216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.483222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.496732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.496748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.496754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.508969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.508986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.508995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.520641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.520657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.520664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.532251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.532268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.532274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.543767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.543783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.543790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.556206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.556223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.556229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.568486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.568503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.568510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.580850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.580866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.950 [2024-05-15 11:11:55.580873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.950 [2024-05-15 11:11:55.592394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:58.950 [2024-05-15 11:11:55.592411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.951 [2024-05-15 11:11:55.592417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.213 [2024-05-15 11:11:55.604785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.213 [2024-05-15 11:11:55.604801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.213 [2024-05-15 11:11:55.604808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.213 [2024-05-15 11:11:55.616349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.213 [2024-05-15 11:11:55.616369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.213 [2024-05-15 11:11:55.616375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.213 [2024-05-15 11:11:55.629488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.213 [2024-05-15 11:11:55.629505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.213 [2024-05-15 11:11:55.629511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.213 [2024-05-15 11:11:55.641135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.213 [2024-05-15 11:11:55.641153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.213 [2024-05-15 11:11:55.641160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.213 [2024-05-15 11:11:55.653718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.213 [2024-05-15 11:11:55.653735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.213 [2024-05-15 11:11:55.653741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.213 [2024-05-15 11:11:55.665986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.213 [2024-05-15 11:11:55.666002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.666008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.678196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.678213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.678220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.689895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.689912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.689918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.700792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.700809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.700815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.713632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.713649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.713656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.724959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.724975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.724982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.736364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.736381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.736387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.749099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.749116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.749122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.762913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.762931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.762937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.774484] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.774501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.774507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.785876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.785893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.785900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.798257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.798274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.798280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.809954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.809970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.809977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.821976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.821993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.822002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.832385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.832402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.832408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.214 [2024-05-15 11:11:55.844935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.844953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.844960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:59.214 [2024-05-15 11:11:55.857964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.214 [2024-05-15 11:11:55.857980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.214 [2024-05-15 11:11:55.857987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.475 [2024-05-15 11:11:55.869387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.475 [2024-05-15 11:11:55.869404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.475 [2024-05-15 11:11:55.869411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.475 [2024-05-15 11:11:55.881603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.475 [2024-05-15 11:11:55.881619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.475 [2024-05-15 11:11:55.881626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.475 [2024-05-15 11:11:55.894012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.475 [2024-05-15 11:11:55.894029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.475 [2024-05-15 11:11:55.894036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.475 [2024-05-15 11:11:55.902911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.475 [2024-05-15 11:11:55.902928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.475 [2024-05-15 11:11:55.902935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.475 [2024-05-15 11:11:55.918842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.475 [2024-05-15 11:11:55.918860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:55.918868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:55.928739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:55.928759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:55.928765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:55.942263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:55.942279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:55.942286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:55.953474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:55.953491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:55.953497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:55.966221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:55.966238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:55.966244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:55.979259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:55.979275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:55.979282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:55.989528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:55.989546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:55.989553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.001834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.001851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.001858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.013309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.013325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.013332] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.025461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.025477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.025484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.037745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.037761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.037768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.050456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.050472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.050479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.061811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.061828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.061834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.074212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.074229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.074236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.086770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.086786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.086793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.100330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.100346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.100353] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.113759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.113775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.113782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.476 [2024-05-15 11:11:56.123794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.476 [2024-05-15 11:11:56.123810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.476 [2024-05-15 11:11:56.123816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.136166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.136186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.136193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.149081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.149097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.149103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.160665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.160682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.160688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.172805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.172822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.172828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.184240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.184256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.737 [2024-05-15 11:11:56.184262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.195438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.195454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.195460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.208290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.208306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.208312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.222852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.222869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.222875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.232427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.232444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.232450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.245849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.245866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.245872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.257995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.737 [2024-05-15 11:11:56.258011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.737 [2024-05-15 11:11:56.258017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.737 [2024-05-15 11:11:56.270130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.270146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3483 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.270152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.281782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.281798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.281804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.293477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.293494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.293500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.305783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.305800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.305806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.317267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.317283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.317290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.329252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.329268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.329274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.342192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.342208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.342217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.354045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.354061] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.354067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.364645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.364661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.364667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.377857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.377873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.377880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.738 [2024-05-15 11:11:56.388903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.738 [2024-05-15 11:11:56.388920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-05-15 11:11:56.388927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.400874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.400891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.400897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.414445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.414461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.414468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.426514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.426530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.426537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.437419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.437435] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.437441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.449177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.449199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.449205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.461961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.461977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.461983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.473943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.473959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.473966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.486340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.486357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.486363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.497688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.497704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.497711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 [2024-05-15 11:11:56.510098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa85ad0) 00:25:59.998 [2024-05-15 11:11:56.510115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.998 [2024-05-15 11:11:56.510121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.998 00:25:59.998 Latency(us) 00:25:59.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.998 Job: nvme0n1 
(Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:59.998 nvme0n1 : 2.04 20641.30 80.63 0.00 0.00 6073.44 2266.45 48715.09 00:25:59.998 =================================================================================================================== 00:25:59.998 Total : 20641.30 80.63 0.00 0.00 6073.44 2266.45 48715.09 00:25:59.998 0 00:25:59.998 11:11:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:59.998 11:11:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:59.998 11:11:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:59.998 | .driver_specific 00:25:59.998 | .nvme_error 00:25:59.998 | .status_code 00:25:59.998 | .command_transient_transport_error' 00:25:59.998 11:11:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:00.258 11:11:56 -- host/digest.sh@71 -- # (( 165 > 0 )) 00:26:00.258 11:11:56 -- host/digest.sh@73 -- # killprocess 493515 00:26:00.258 11:11:56 -- common/autotest_common.sh@946 -- # '[' -z 493515 ']' 00:26:00.258 11:11:56 -- common/autotest_common.sh@950 -- # kill -0 493515 00:26:00.258 11:11:56 -- common/autotest_common.sh@951 -- # uname 00:26:00.258 11:11:56 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:00.258 11:11:56 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 493515 00:26:00.258 11:11:56 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:00.258 11:11:56 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:00.258 11:11:56 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 493515' 00:26:00.258 killing process with pid 493515 00:26:00.258 11:11:56 -- common/autotest_common.sh@965 -- # kill 493515 00:26:00.259 Received shutdown signal, test time was about 2.000000 seconds 00:26:00.259 00:26:00.259 Latency(us) 00:26:00.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.259 =================================================================================================================== 00:26:00.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.259 11:11:56 -- common/autotest_common.sh@970 -- # wait 493515 00:26:00.259 11:11:56 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:00.259 11:11:56 -- host/digest.sh@54 -- # local rw bs qd 00:26:00.259 11:11:56 -- host/digest.sh@56 -- # rw=randread 00:26:00.259 11:11:56 -- host/digest.sh@56 -- # bs=131072 00:26:00.259 11:11:56 -- host/digest.sh@56 -- # qd=16 00:26:00.259 11:11:56 -- host/digest.sh@58 -- # bperfpid=494204 00:26:00.259 11:11:56 -- host/digest.sh@60 -- # waitforlisten 494204 /var/tmp/bperf.sock 00:26:00.259 11:11:56 -- common/autotest_common.sh@827 -- # '[' -z 494204 ']' 00:26:00.259 11:11:56 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:00.259 11:11:56 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:00.259 11:11:56 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:00.259 11:11:56 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:00.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
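The pass/fail check traced above ("(( 165 > 0 ))") is simply the transient-error counter read back over RPC; a minimal standalone sketch of that query, assuming rpc.py from the SPDK source tree, a bdevperf instance listening on /var/tmp/bperf.sock, and a bdev named nvme0n1:
# Read the per-bdev NVMe error statistics (kept when bdev_nvme_set_options is
# given --nvme-error-stat) and extract the counter that the injected data
# digest errors increment.
errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest test passes when at least one command completed with a transient
# transport error (00/22), i.e. the corrupted digests were actually detected.
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"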
00:26:00.259 11:11:56 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:00.259 11:11:56 -- common/autotest_common.sh@10 -- # set +x 00:26:00.519 [2024-05-15 11:11:56.950469] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:26:00.519 [2024-05-15 11:11:56.950524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494204 ] 00:26:00.519 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:00.519 Zero copy mechanism will not be used. 00:26:00.519 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.519 [2024-05-15 11:11:57.025494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.519 [2024-05-15 11:11:57.079118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.092 11:11:57 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:01.092 11:11:57 -- common/autotest_common.sh@860 -- # return 0 00:26:01.092 11:11:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:01.092 11:11:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:01.350 11:11:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:01.350 11:11:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.350 11:11:57 -- common/autotest_common.sh@10 -- # set +x 00:26:01.351 11:11:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.351 11:11:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.351 11:11:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.610 nvme0n1 00:26:01.610 11:11:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:01.610 11:11:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.610 11:11:58 -- common/autotest_common.sh@10 -- # set +x 00:26:01.610 11:11:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.610 11:11:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:01.610 11:11:58 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:01.610 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:01.610 Zero copy mechanism will not be used. 00:26:01.610 Running I/O for 2 seconds... 
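The setup traced above for this run reduces to a handful of RPCs followed by a perform_tests trigger; a minimal sketch, assuming the same sockets as in the trace (/var/tmp/bperf.sock for the bdevperf instance, and what appears to be the target application's default RPC socket for the rpc_cmd calls) and the subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420:
BPERF_RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf (initiator) RPC
TGT_RPC="./scripts/rpc.py"                            # target app, default socket
# Keep NVMe error statistics and retry failed commands indefinitely, so the
# injected digest errors are counted instead of failing the I/O job outright.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any error injection left over from the previous run.
$TGT_RPC accel_error_inject_error -o crc32c -t disable
# Attach the controller with TCP data digest enabled: received data PDUs are
# CRC32C-checked, and a mismatch is logged as a data digest error.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt every 32nd crc32c operation in the accel framework so some digests
# no longer match the transferred data.
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# Start the queued I/O (randread, 128 KiB requests, queue depth 16, 2 seconds).
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests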
00:26:01.610 [2024-05-15 11:11:58.234000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.610 [2024-05-15 11:11:58.234030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.610 [2024-05-15 11:11:58.234038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.610 [2024-05-15 11:11:58.243291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.610 [2024-05-15 11:11:58.243311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.610 [2024-05-15 11:11:58.243318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.610 [2024-05-15 11:11:58.251460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.610 [2024-05-15 11:11:58.251478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.610 [2024-05-15 11:11:58.251484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.610 [2024-05-15 11:11:58.261702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.610 [2024-05-15 11:11:58.261719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.610 [2024-05-15 11:11:58.261726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.871 [2024-05-15 11:11:58.272019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.871 [2024-05-15 11:11:58.272036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.871 [2024-05-15 11:11:58.272042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.281785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.281801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.281808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.292844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.292861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.292867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.299931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.299948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.299959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.310338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.310355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.310361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.320070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.320087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.320093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.329928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.329945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.329952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.337968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.337985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.337991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.349238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.349254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.349260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.361006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.361023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.361029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.369097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.369114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.369120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.374704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.374721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.374727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.384790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.384810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.384816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.394662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.394678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.394684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.406254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.406271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.406277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.414923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.414940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.414946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.424060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.424076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:01.872 [2024-05-15 11:11:58.424082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.430682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.430698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.430704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.436752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.436769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.436775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.872 [2024-05-15 11:11:58.445339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.872 [2024-05-15 11:11:58.445355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.872 [2024-05-15 11:11:58.445362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.457267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.873 [2024-05-15 11:11:58.457284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.873 [2024-05-15 11:11:58.457293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.460925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.873 [2024-05-15 11:11:58.460942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.873 [2024-05-15 11:11:58.460949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.466363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.873 [2024-05-15 11:11:58.466380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.873 [2024-05-15 11:11:58.466387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.474222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.873 [2024-05-15 11:11:58.474240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.873 [2024-05-15 11:11:58.474246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.485809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.873 [2024-05-15 11:11:58.485826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.873 [2024-05-15 11:11:58.485833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.495647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.873 [2024-05-15 11:11:58.495665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.873 [2024-05-15 11:11:58.495672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.501790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.873 [2024-05-15 11:11:58.501808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.873 [2024-05-15 11:11:58.501815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.512332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:01.873 [2024-05-15 11:11:58.512350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.873 [2024-05-15 11:11:58.512356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.873 [2024-05-15 11:11:58.523652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.523669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.523676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.533001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.533022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.533028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.542674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.542692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.542698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.552341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.552359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.552365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.559894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.559911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.559918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.570666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.570683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.570689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.576159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.576177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.576183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.581964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.581981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.581988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.586656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.586673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.586679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.589331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 
[2024-05-15 11:11:58.589348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.589354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.598767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.598784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.598790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.610107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.610124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.610130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.619643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.619660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.619666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.629705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.629722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.629728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.640598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.640615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.640621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.650736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.650753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.650759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.660819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x21f4420) 00:26:02.135 [2024-05-15 11:11:58.660837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.135 [2024-05-15 11:11:58.660843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.135 [2024-05-15 11:11:58.668889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.668905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.668912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.678203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.678220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.678230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.683244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.683261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.683268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.688026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.688044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.688050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.695899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.695916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.695923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.702698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.702716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.702724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.712194] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.712212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.712218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.721414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.721432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.721440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.729603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.729620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.729626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.737194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.737211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.737217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.748360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.748380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.748386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.753791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.753808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.753814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.762874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.762891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.762898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:02.136 [2024-05-15 11:11:58.773889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.773906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.773912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.136 [2024-05-15 11:11:58.786024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.136 [2024-05-15 11:11:58.786041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.136 [2024-05-15 11:11:58.786047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.797470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.797487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.797493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.809704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.809721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.809728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.818916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.818933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.818940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.830359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.830377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.830384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.842950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.842968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.842974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.855639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.855656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.855662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.860932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.860949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.860956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.865972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.865990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.865997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.872407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.872423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.872430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.877584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.877601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.877607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.884448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.884466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.884472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.889495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.889512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.889518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.901105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.901123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.901132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.910860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.910878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.910885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.918511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.918529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.918535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.925758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.925776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.925783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.935542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.935564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.935570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.945892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.945910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.945917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.956161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.956178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.956184] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.962046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.962064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.962070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.970489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.970506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.970512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.975432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.975453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.975460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.399 [2024-05-15 11:11:58.980555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.399 [2024-05-15 11:11:58.980572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-05-15 11:11:58.980579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:58.986919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:58.986936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.400 [2024-05-15 11:11:58.986943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:58.997033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:58.997051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.400 [2024-05-15 11:11:58.997057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:59.003356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:59.003374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:02.400 [2024-05-15 11:11:59.003381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:59.014355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:59.014373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.400 [2024-05-15 11:11:59.014379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:59.023339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:59.023356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.400 [2024-05-15 11:11:59.023363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:59.029922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:59.029939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.400 [2024-05-15 11:11:59.029945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:59.033886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:59.033903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.400 [2024-05-15 11:11:59.033912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:59.041983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:59.042000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.400 [2024-05-15 11:11:59.042007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.400 [2024-05-15 11:11:59.046073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.400 [2024-05-15 11:11:59.046090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.400 [2024-05-15 11:11:59.046096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.056465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.056484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.056490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.064613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.064631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.064637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.073393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.073411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.073417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.081857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.081874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.081881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.092238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.092257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.092263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.098229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.098247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.098254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.105806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.105827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.105833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.115985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.116003] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.116009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.122107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.122125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.122132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.130997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.131015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.131021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.138157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.138174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.138180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.149654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.149672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.149678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.160391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.160409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.160415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.167260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.167277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.167283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.172500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.172518] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.172525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.177215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.177233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.177240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.185690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.185708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.185714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.193378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.193396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.193402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.203099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.203117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.203123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.210207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.210224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.210231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.216692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.216709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.216716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.224035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.224053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.224059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.230703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.230720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.662 [2024-05-15 11:11:59.230726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.662 [2024-05-15 11:11:59.240392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.662 [2024-05-15 11:11:59.240410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.663 [2024-05-15 11:11:59.240422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.663 [2024-05-15 11:11:59.250004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.663 [2024-05-15 11:11:59.250022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.663 [2024-05-15 11:11:59.250028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.663 [2024-05-15 11:11:59.254745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.663 [2024-05-15 11:11:59.254763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.663 [2024-05-15 11:11:59.254770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.663 [2024-05-15 11:11:59.266227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.663 [2024-05-15 11:11:59.266245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.663 [2024-05-15 11:11:59.266251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.663 [2024-05-15 11:11:59.275070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.663 [2024-05-15 11:11:59.275088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.663 [2024-05-15 11:11:59.275094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.663 [2024-05-15 11:11:59.286346] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.663 [2024-05-15 11:11:59.286364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.663 [2024-05-15 11:11:59.286370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.663 [2024-05-15 11:11:59.295181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.663 [2024-05-15 11:11:59.295198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.663 [2024-05-15 11:11:59.295204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.663 [2024-05-15 11:11:59.304793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.663 [2024-05-15 11:11:59.304810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.663 [2024-05-15 11:11:59.304816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.314760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.314778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.314784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.324801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.324822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.324828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.333447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.333465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.333471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.338732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.338749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.338756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:26:02.925 [2024-05-15 11:11:59.347884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.347902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.347908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.356465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.356483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.356489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.366355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.366376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.366383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.371416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.371434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.371440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.379757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.379774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.379780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.389584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.925 [2024-05-15 11:11:59.389602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.925 [2024-05-15 11:11:59.389608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.925 [2024-05-15 11:11:59.395809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.395826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.395833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.405060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.405077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.405083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.415038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.415054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.415061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.423917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.423934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.423940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.428677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.428693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.428699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.437626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.437643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.437649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.443005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.443023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.443029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.450555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.450574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.450580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.461623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.461640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.461650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.471806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.471823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.471829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.479455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.479471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.479477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.488245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.488262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.488269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.498858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.498876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.498882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.509804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.509822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.509828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.520994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.521012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.926 [2024-05-15 11:11:59.521018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.532758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.532776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.532782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.542651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.542668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.542674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.553069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.553086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.553092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.562876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.562894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.562900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.568913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.568930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.568936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.926 [2024-05-15 11:11:59.574202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:02.926 [2024-05-15 11:11:59.574220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.926 [2024-05-15 11:11:59.574226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.579605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.579623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.187 [2024-05-15 11:11:59.579629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.588739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.588757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.187 [2024-05-15 11:11:59.588763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.595645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.595661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.187 [2024-05-15 11:11:59.595668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.600735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.600752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.187 [2024-05-15 11:11:59.600758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.610032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.610049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.187 [2024-05-15 11:11:59.610058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.614552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.614569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.187 [2024-05-15 11:11:59.614575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.622193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.622210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.187 [2024-05-15 11:11:59.622216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.628362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.628378] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.187 [2024-05-15 11:11:59.628384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.187 [2024-05-15 11:11:59.636152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.187 [2024-05-15 11:11:59.636169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.636175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.645012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.645029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.645035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.652108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.652125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.652131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.657279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.657296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.657302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.665300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.665318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.665324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.674067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.674087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.674093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.684048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.684066] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.684072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.695822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.695840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.695846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.707781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.707798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.707804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.720178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.720195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.720201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.732621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.732639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.732645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.741681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.741699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.741705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.748683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.748700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.748706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.755349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.755368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.755374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.761821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.761838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.761844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.771664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.771682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.771688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.781221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.781237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.781243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.790226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.790243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.790249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.799161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.799178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.799184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.807503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.807520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.807527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.817857] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.817874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.817880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.827073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.827090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.827096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.833308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.833325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.833334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.188 [2024-05-15 11:11:59.838336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.188 [2024-05-15 11:11:59.838353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.188 [2024-05-15 11:11:59.838359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.450 [2024-05-15 11:11:59.843409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.450 [2024-05-15 11:11:59.843427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.450 [2024-05-15 11:11:59.843433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.851301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.851318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.851324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.856179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.856196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.856202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:26:03.451 [2024-05-15 11:11:59.862445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.862461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.862467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.867936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.867954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.867960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.874874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.874891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.874897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.882201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.882218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.882224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.887128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.887148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.887155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.892694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.892711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.892717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.902917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.902935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.902941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.907615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.907632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.907638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.917812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.917828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.917835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.927870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.927887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.927894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.939050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.939067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.939074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.945454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.945470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.945477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.954736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.954754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.954760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.962369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.962386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.962392] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.969624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.969641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.969647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.974749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.974766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.974772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.983715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.983732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.983739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.993702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.993719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.993726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:11:59.998920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:11:59.998937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:11:59.998943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:12:00.008636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:12:00.008656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:12:00.008663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:12:00.018251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:12:00.018268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:12:00.018275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:12:00.023296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:12:00.023314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:12:00.023323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:12:00.028415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:12:00.028433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:12:00.028439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:12:00.033466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:12:00.033483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:12:00.033490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:12:00.038230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:12:00.038247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.451 [2024-05-15 11:12:00.038254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.451 [2024-05-15 11:12:00.043281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.451 [2024-05-15 11:12:00.043298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.043304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.048254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.048271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.048277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.053449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.053473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:03.452 [2024-05-15 11:12:00.053486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.061570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.061590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.061598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.068786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.068805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.068811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.073782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.073799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.073806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.078700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.078716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.078722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.083649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.083666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.083672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.088605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.088622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.088629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.093531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.093553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.093560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.452 [2024-05-15 11:12:00.098474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.452 [2024-05-15 11:12:00.098491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.452 [2024-05-15 11:12:00.098497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.103542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.103565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.103572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.112371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.112388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.112394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.124354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.124372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.124381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.135276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.135294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.135300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.144895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.144912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.144919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.151269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.151286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.151293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.159535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.159556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.159563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.166668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.166685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.166691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.178073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.713 [2024-05-15 11:12:00.178090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.713 [2024-05-15 11:12:00.178096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.713 [2024-05-15 11:12:00.185859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.714 [2024-05-15 11:12:00.185876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.714 [2024-05-15 11:12:00.185882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.714 [2024-05-15 11:12:00.195515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.714 [2024-05-15 11:12:00.195532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.714 [2024-05-15 11:12:00.195538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.714 [2024-05-15 11:12:00.206016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.714 [2024-05-15 11:12:00.206038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.714 [2024-05-15 11:12:00.206044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.714 [2024-05-15 11:12:00.213834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 
00:26:03.714 [2024-05-15 11:12:00.213852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.714 [2024-05-15 11:12:00.213859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.714 [2024-05-15 11:12:00.223179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f4420) 00:26:03.714 [2024-05-15 11:12:00.223195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.714 [2024-05-15 11:12:00.223202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.714 00:26:03.714 Latency(us) 00:26:03.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.714 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:03.714 nvme0n1 : 2.00 3766.97 470.87 0.00 0.00 4243.64 488.11 14308.69 00:26:03.714 =================================================================================================================== 00:26:03.714 Total : 3766.97 470.87 0.00 0.00 4243.64 488.11 14308.69 00:26:03.714 0 00:26:03.714 11:12:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:03.714 11:12:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:03.714 11:12:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:03.714 | .driver_specific 00:26:03.714 | .nvme_error 00:26:03.714 | .status_code 00:26:03.714 | .command_transient_transport_error' 00:26:03.714 11:12:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:03.974 11:12:00 -- host/digest.sh@71 -- # (( 243 > 0 )) 00:26:03.974 11:12:00 -- host/digest.sh@73 -- # killprocess 494204 00:26:03.974 11:12:00 -- common/autotest_common.sh@946 -- # '[' -z 494204 ']' 00:26:03.974 11:12:00 -- common/autotest_common.sh@950 -- # kill -0 494204 00:26:03.974 11:12:00 -- common/autotest_common.sh@951 -- # uname 00:26:03.974 11:12:00 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:03.974 11:12:00 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 494204 00:26:03.974 11:12:00 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:03.974 11:12:00 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:03.974 11:12:00 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 494204' 00:26:03.974 killing process with pid 494204 00:26:03.974 11:12:00 -- common/autotest_common.sh@965 -- # kill 494204 00:26:03.974 Received shutdown signal, test time was about 2.000000 seconds 00:26:03.974 00:26:03.974 Latency(us) 00:26:03.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.974 =================================================================================================================== 00:26:03.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:03.974 11:12:00 -- common/autotest_common.sh@970 -- # wait 494204 00:26:03.974 11:12:00 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:03.974 11:12:00 -- host/digest.sh@54 -- # local rw bs qd 00:26:03.974 11:12:00 -- host/digest.sh@56 -- # rw=randwrite 00:26:03.975 11:12:00 -- host/digest.sh@56 -- # bs=4096 00:26:03.975 11:12:00 
-- host/digest.sh@56 -- # qd=128 00:26:03.975 11:12:00 -- host/digest.sh@58 -- # bperfpid=494982 00:26:03.975 11:12:00 -- host/digest.sh@60 -- # waitforlisten 494982 /var/tmp/bperf.sock 00:26:03.975 11:12:00 -- common/autotest_common.sh@827 -- # '[' -z 494982 ']' 00:26:03.975 11:12:00 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:03.975 11:12:00 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.975 11:12:00 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:03.975 11:12:00 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:03.975 11:12:00 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:03.975 11:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:03.975 [2024-05-15 11:12:00.621362] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:26:03.975 [2024-05-15 11:12:00.621416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494982 ] 00:26:04.235 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.235 [2024-05-15 11:12:00.695935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.235 [2024-05-15 11:12:00.748604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.806 11:12:01 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:04.806 11:12:01 -- common/autotest_common.sh@860 -- # return 0 00:26:04.806 11:12:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:04.806 11:12:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:05.066 11:12:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:05.066 11:12:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.066 11:12:01 -- common/autotest_common.sh@10 -- # set +x 00:26:05.066 11:12:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.066 11:12:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.066 11:12:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.327 nvme0n1 00:26:05.327 11:12:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:05.327 11:12:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.327 11:12:01 -- common/autotest_common.sh@10 -- # set +x 00:26:05.327 11:12:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.327 11:12:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:05.327 11:12:01 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:05.587 Running I/O for 2 seconds... 
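For reference, the transient-transport-error count checked after the preceding randread run (the (( 243 > 0 )) test at host/digest.sh@71) is produced by the rpc.py/jq pipeline spread across the xtrace lines above; the same check can be expected after this randwrite pass. Restated in one piece as a sketch, with the rpc.py path, socket, bdev name and jq filter copied from those xtrace lines rather than assumed:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # A non-zero count (243 in the run above) indicates the injected CRC32C data digest
  # failures were reported back as COMMAND TRANSIENT TRANSPORT ERROR completions.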
00:26:05.587 [2024-05-15 11:12:02.049486] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f9f68 00:26:05.587 [2024-05-15 11:12:02.050352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.587 [2024-05-15 11:12:02.050381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.587 [2024-05-15 11:12:02.063795] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e6738 00:26:05.587 [2024-05-15 11:12:02.065549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.587 [2024-05-15 11:12:02.065568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:05.587 [2024-05-15 11:12:02.075170] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fb048 00:26:05.587 [2024-05-15 11:12:02.076907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.587 [2024-05-15 11:12:02.076926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:05.587 [2024-05-15 11:12:02.086542] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fa7d8 00:26:05.587 [2024-05-15 11:12:02.088246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.587 [2024-05-15 11:12:02.088262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:05.587 [2024-05-15 11:12:02.095925] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e6b70 00:26:05.587 [2024-05-15 11:12:02.097012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.097030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.107256] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e73e0 00:26:05.588 [2024-05-15 11:12:02.108316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.108332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.121513] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f5378 00:26:05.588 [2024-05-15 11:12:02.123357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.123374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.131469] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fda78 00:26:05.588 [2024-05-15 11:12:02.132650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.132666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.142107] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fe720 00:26:05.588 [2024-05-15 11:12:02.143300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.143318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.156326] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fef90 00:26:05.588 [2024-05-15 11:12:02.158324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.158340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.166706] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f35f0 00:26:05.588 [2024-05-15 11:12:02.168194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.168212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.176124] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f2d80 00:26:05.588 [2024-05-15 11:12:02.177018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.177035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.186820] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e6b70 00:26:05.588 [2024-05-15 11:12:02.187700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.187716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.198166] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e6300 00:26:05.588 [2024-05-15 11:12:02.199009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.199025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.209529] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ec408 00:26:05.588 [2024-05-15 11:12:02.210361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.210376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.221699] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190de038 00:26:05.588 [2024-05-15 11:12:02.222521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.222537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:05.588 [2024-05-15 11:12:02.233182] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f2510 00:26:05.588 [2024-05-15 11:12:02.233966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.588 [2024-05-15 11:12:02.233982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.244054] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ed920 00:26:05.849 [2024-05-15 11:12:02.244873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.244890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.258141] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ed920 00:26:05.849 [2024-05-15 11:12:02.259701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.259717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.269479] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ee190 00:26:05.849 [2024-05-15 11:12:02.271061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.271077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.279403] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190eea00 00:26:05.849 [2024-05-15 11:12:02.280316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.280333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.290851] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190efae0 00:26:05.849 [2024-05-15 11:12:02.291760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.291777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.302309] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fac10 00:26:05.849 [2024-05-15 11:12:02.303263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.303280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.313778] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f9b30 00:26:05.849 [2024-05-15 11:12:02.314722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.314738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.326613] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f0788 00:26:05.849 [2024-05-15 11:12:02.328258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.328274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.337069] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:05.849 [2024-05-15 11:12:02.338165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.338181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.350142] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ebb98 00:26:05.849 [2024-05-15 11:12:02.351861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.351878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.360511] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f4298 00:26:05.849 [2024-05-15 11:12:02.361768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.361785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.373619] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e8d30 00:26:05.849 [2024-05-15 11:12:02.375479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.375501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.383495] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e95a0 00:26:05.849 [2024-05-15 11:12:02.384715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.384731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.394935] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e95a0 00:26:05.849 [2024-05-15 11:12:02.396163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.396180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.406376] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e95a0 00:26:05.849 [2024-05-15 11:12:02.407626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.407642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.417837] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e95a0 00:26:05.849 [2024-05-15 11:12:02.419065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.419082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.429299] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e95a0 00:26:05.849 [2024-05-15 11:12:02.430405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.430421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.439925] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e9e10 00:26:05.849 [2024-05-15 11:12:02.441083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 
11:12:02.441100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.451262] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ea680 00:26:05.849 [2024-05-15 11:12:02.452441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.452457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.462631] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190eaef0 00:26:05.849 [2024-05-15 11:12:02.463773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.463790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.476266] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e6b70 00:26:05.849 [2024-05-15 11:12:02.478061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.478078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.486621] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f9f68 00:26:05.849 [2024-05-15 11:12:02.487948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.849 [2024-05-15 11:12:02.487965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:05.849 [2024-05-15 11:12:02.499719] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f7da8 00:26:06.110 [2024-05-15 11:12:02.501620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-05-15 11:12:02.501637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:06.110 [2024-05-15 11:12:02.509639] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f7538 00:26:06.110 [2024-05-15 11:12:02.510876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-05-15 11:12:02.510891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.110 [2024-05-15 11:12:02.521078] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fb048 00:26:06.110 [2024-05-15 11:12:02.522349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:06.110 [2024-05-15 11:12:02.522365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.110 [2024-05-15 11:12:02.531720] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fa7d8 00:26:06.110 [2024-05-15 11:12:02.533008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-05-15 11:12:02.533025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:06.110 [2024-05-15 11:12:02.543059] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f9f68 00:26:06.110 [2024-05-15 11:12:02.544329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-05-15 11:12:02.544346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:06.110 [2024-05-15 11:12:02.554401] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f96f8 00:26:06.110 [2024-05-15 11:12:02.555658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-05-15 11:12:02.555674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:06.110 [2024-05-15 11:12:02.568264] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e6300 00:26:06.110 [2024-05-15 11:12:02.570104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-05-15 11:12:02.570120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:06.110 [2024-05-15 11:12:02.577835] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ecc78 00:26:06.110 [2024-05-15 11:12:02.579235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-05-15 11:12:02.579252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.590017] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ecc78 00:26:06.111 [2024-05-15 11:12:02.591388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.591405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.600638] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ed4e8 00:26:06.111 [2024-05-15 11:12:02.602028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23709 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.602044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.612766] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e5658 00:26:06.111 [2024-05-15 11:12:02.614143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.614159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.624310] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e4140 00:26:06.111 [2024-05-15 11:12:02.625709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.625726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.637218] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e3060 00:26:06.111 [2024-05-15 11:12:02.639219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.639236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.647844] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f0bc0 00:26:06.111 [2024-05-15 11:12:02.649500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.649518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.657089] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e0ea0 00:26:06.111 [2024-05-15 11:12:02.658108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.658124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.668820] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f31b8 00:26:06.111 [2024-05-15 11:12:02.669800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.669819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.679484] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e84c0 00:26:06.111 [2024-05-15 11:12:02.680488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:8789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.680506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.691638] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.111 [2024-05-15 11:12:02.692635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.692651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.703064] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.111 [2024-05-15 11:12:02.704054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.704070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.714501] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.111 [2024-05-15 11:12:02.715501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.715518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.725945] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.111 [2024-05-15 11:12:02.726944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.726961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.737380] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.111 [2024-05-15 11:12:02.738379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.748826] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.111 [2024-05-15 11:12:02.749820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.749837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.111 [2024-05-15 11:12:02.760272] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.111 [2024-05-15 11:12:02.761229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:53 nsid:1 lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-05-15 11:12:02.761246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.371 [2024-05-15 11:12:02.770889] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ec840 00:26:06.371 [2024-05-15 11:12:02.771863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.771881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.782602] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fac10 00:26:06.372 [2024-05-15 11:12:02.783559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.783575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.795118] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f1430 00:26:06.372 [2024-05-15 11:12:02.796335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.796352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.807705] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fd640 00:26:06.372 [2024-05-15 11:12:02.809292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.809310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.816926] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e88f8 00:26:06.372 [2024-05-15 11:12:02.817882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.817898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.829433] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f0788 00:26:06.372 [2024-05-15 11:12:02.830710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.830727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.840424] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f7100 00:26:06.372 [2024-05-15 11:12:02.841655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.841672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.852016] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190de038 00:26:06.372 [2024-05-15 11:12:02.853236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.853253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.864422] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f2510 00:26:06.372 [2024-05-15 11:12:02.865809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.865825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.877161] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190dfdc0 00:26:06.372 [2024-05-15 11:12:02.879015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.879031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.886369] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e99d8 00:26:06.372 [2024-05-15 11:12:02.887588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.887604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.897908] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ef6a8 00:26:06.372 [2024-05-15 11:12:02.898975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.898991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.909822] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190eb760 00:26:06.372 [2024-05-15 11:12:02.911189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.911205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.919396] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e5658 00:26:06.372 [2024-05-15 
11:12:02.920184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.920200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.930767] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fb8b8 00:26:06.372 [2024-05-15 11:12:02.931645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.931661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.942889] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fcdd0 00:26:06.372 [2024-05-15 11:12:02.943745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.943761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.954304] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fcdd0 00:26:06.372 [2024-05-15 11:12:02.955191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.955209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.965726] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fcdd0 00:26:06.372 [2024-05-15 11:12:02.966565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.966583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.977149] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fcdd0 00:26:06.372 [2024-05-15 11:12:02.978034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.978050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:02.988602] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fcdd0 00:26:06.372 [2024-05-15 11:12:02.989476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:02.989492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:03.000249] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fcdd0 
00:26:06.372 [2024-05-15 11:12:03.001133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:03.001148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:03.010920] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e8088 00:26:06.372 [2024-05-15 11:12:03.011791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:03.011807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:06.372 [2024-05-15 11:12:03.023088] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fcdd0 00:26:06.372 [2024-05-15 11:12:03.023945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.372 [2024-05-15 11:12:03.023961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:06.633 [2024-05-15 11:12:03.034550] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f92c0 00:26:06.634 [2024-05-15 11:12:03.035365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.035381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.045181] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f5be8 00:26:06.634 [2024-05-15 11:12:03.046021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.046038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.057324] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f8e88 00:26:06.634 [2024-05-15 11:12:03.058150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.058166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.068818] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fb8b8 00:26:06.634 [2024-05-15 11:12:03.069609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.069626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.080309] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with 
pdu=0x2000190de470 00:26:06.634 [2024-05-15 11:12:03.081140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.081157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.091797] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e1f80 00:26:06.634 [2024-05-15 11:12:03.092620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.092636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.104696] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e6fa8 00:26:06.634 [2024-05-15 11:12:03.106141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.106157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.114624] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e1f80 00:26:06.634 [2024-05-15 11:12:03.115439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.115455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.126041] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e1f80 00:26:06.634 [2024-05-15 11:12:03.126859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.126876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.137468] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e1f80 00:26:06.634 [2024-05-15 11:12:03.138296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.138311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.150116] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f6890 00:26:06.634 [2024-05-15 11:12:03.151231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.151247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.162013] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f710c0) with pdu=0x2000190fc560 00:26:06.634 [2024-05-15 11:12:03.163430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.163445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.171595] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fac10 00:26:06.634 [2024-05-15 11:12:03.172508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.172524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.183726] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e12d8 00:26:06.634 [2024-05-15 11:12:03.184688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.184705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.195162] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e12d8 00:26:06.634 [2024-05-15 11:12:03.196123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.196139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.206602] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e12d8 00:26:06.634 [2024-05-15 11:12:03.207518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.207533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.218050] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fac10 00:26:06.634 [2024-05-15 11:12:03.218991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.219007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.229509] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f7100 00:26:06.634 [2024-05-15 11:12:03.230452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.230468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.242414] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f710c0) with pdu=0x2000190ddc00 00:26:06.634 [2024-05-15 11:12:03.243984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.243999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.254185] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e5a90 00:26:06.634 [2024-05-15 11:12:03.255709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.255724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.263406] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e38d0 00:26:06.634 [2024-05-15 11:12:03.264326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.264344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:06.634 [2024-05-15 11:12:03.275990] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e99d8 00:26:06.634 [2024-05-15 11:12:03.277197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.634 [2024-05-15 11:12:03.277213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.286946] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ea248 00:26:06.895 [2024-05-15 11:12:03.288119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.288135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.299320] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fda78 00:26:06.895 [2024-05-15 11:12:03.300775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.300790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.309253] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f6458 00:26:06.895 [2024-05-15 11:12:03.310094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.310110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.320737] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e1710 00:26:06.895 [2024-05-15 11:12:03.321570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.321586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.332199] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190eb760 00:26:06.895 [2024-05-15 11:12:03.332991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.333007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.343649] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fbcf0 00:26:06.895 [2024-05-15 11:12:03.344467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.344483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.356623] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ebfd0 00:26:06.895 [2024-05-15 11:12:03.358069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.358085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.366969] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e73e0 00:26:06.895 [2024-05-15 11:12:03.367918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.367934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.380092] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e7c50 00:26:06.895 [2024-05-15 11:12:03.381703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.381719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.391802] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f0ff8 00:26:06.895 [2024-05-15 11:12:03.393353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.393368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.895 
[2024-05-15 11:12:03.401021] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fc998 00:26:06.895 [2024-05-15 11:12:03.401984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.401999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.413527] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f8e88 00:26:06.895 [2024-05-15 11:12:03.414736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.414752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.426738] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ed0b0 00:26:06.895 [2024-05-15 11:12:03.428581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.428597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.436667] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fdeb0 00:26:06.895 [2024-05-15 11:12:03.437919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.437935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.448142] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e27f0 00:26:06.895 [2024-05-15 11:12:03.449389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.449406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.459659] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f0bc0 00:26:06.895 [2024-05-15 11:12:03.460904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.460921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.471114] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f7da8 00:26:06.895 [2024-05-15 11:12:03.472357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.472373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c 
p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.482600] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f4f40 00:26:06.895 [2024-05-15 11:12:03.483840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.483855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.493482] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e7c50 00:26:06.895 [2024-05-15 11:12:03.494701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.494717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.505050] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190eb328 00:26:06.895 [2024-05-15 11:12:03.506135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.506151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.516904] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f0bc0 00:26:06.895 [2024-05-15 11:12:03.518285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.518301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.526469] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.895 [2024-05-15 11:12:03.527415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.527432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:06.895 [2024-05-15 11:12:03.538623] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:06.895 [2024-05-15 11:12:03.539543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.895 [2024-05-15 11:12:03.539562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.550086] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f3a28 00:26:07.157 [2024-05-15 11:12:03.551007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.551023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.563207] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190de470 00:26:07.157 [2024-05-15 11:12:03.564708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.564726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.572414] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fc128 00:26:07.157 [2024-05-15 11:12:03.573272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.573289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.584778] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fb048 00:26:07.157 [2024-05-15 11:12:03.585858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.585874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.597502] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190de8a8 00:26:07.157 [2024-05-15 11:12:03.599038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.599054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.607080] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190df118 00:26:07.157 [2024-05-15 11:12:03.608167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.608183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.618445] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190df988 00:26:07.157 [2024-05-15 11:12:03.619502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.619517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.630028] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f8a50 00:26:07.157 [2024-05-15 11:12:03.631061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.631076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.644224] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e3060 00:26:07.157 [2024-05-15 11:12:03.646051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.646066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:07.157 [2024-05-15 11:12:03.653803] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fac10 00:26:07.157 [2024-05-15 11:12:03.655150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.157 [2024-05-15 11:12:03.655166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.664189] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e5a90 00:26:07.158 [2024-05-15 11:12:03.665083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.665099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.676928] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190eaab8 00:26:07.158 [2024-05-15 11:12:03.678251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.678266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.686499] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fdeb0 00:26:07.158 [2024-05-15 11:12:03.687371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.687387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.700140] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e5ec8 00:26:07.158 [2024-05-15 11:12:03.701600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.701615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.710134] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e49b0 00:26:07.158 [2024-05-15 11:12:03.711005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.711022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.721616] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f4b08 00:26:07.158 [2024-05-15 11:12:03.722489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.722504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.734523] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e3498 00:26:07.158 [2024-05-15 11:12:03.736029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.736044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.744889] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190df988 00:26:07.158 [2024-05-15 11:12:03.745915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.745930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.755723] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e01f8 00:26:07.158 [2024-05-15 11:12:03.756700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.756716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.767950] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190df118 00:26:07.158 [2024-05-15 11:12:03.768950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.768966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.779399] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e8088 00:26:07.158 [2024-05-15 11:12:03.780397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.780412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.790886] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f6020 00:26:07.158 [2024-05-15 11:12:03.791923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.791939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.158 [2024-05-15 11:12:03.802366] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f6458 00:26:07.158 [2024-05-15 11:12:03.803370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.158 [2024-05-15 11:12:03.803385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.813847] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e4140 00:26:07.421 [2024-05-15 11:12:03.814848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.814864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.825359] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f46d0 00:26:07.421 [2024-05-15 11:12:03.826367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.826383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.836845] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190dece0 00:26:07.421 [2024-05-15 11:12:03.837833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.837848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.849713] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e0a68 00:26:07.421 [2024-05-15 11:12:03.851348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.851363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.859674] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190dece0 00:26:07.421 [2024-05-15 11:12:03.860672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.860691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.871150] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e4de8 00:26:07.421 [2024-05-15 11:12:03.872169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 
11:12:03.872184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.882677] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ee190 00:26:07.421 [2024-05-15 11:12:03.883670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.883686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.893400] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e3060 00:26:07.421 [2024-05-15 11:12:03.894392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.894407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.905140] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190dece0 00:26:07.421 [2024-05-15 11:12:03.906096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.906111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.917676] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190ed4e8 00:26:07.421 [2024-05-15 11:12:03.918929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.918946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.930264] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e8d30 00:26:07.421 [2024-05-15 11:12:03.931871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.931886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.939480] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f0ff8 00:26:07.421 [2024-05-15 11:12:03.940458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.940474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.951993] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190dece0 00:26:07.421 [2024-05-15 11:12:03.953222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:07.421 [2024-05-15 11:12:03.953238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.965230] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fe2e8 00:26:07.421 [2024-05-15 11:12:03.967094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.967110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.975254] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e2c28 00:26:07.421 [2024-05-15 11:12:03.976513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.976528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.985935] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fa7d8 00:26:07.421 [2024-05-15 11:12:03.987157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.987172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:03.997522] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f2510 00:26:07.421 [2024-05-15 11:12:03.998748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:03.998764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:04.011371] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190f4f40 00:26:07.421 [2024-05-15 11:12:04.013244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:04.013260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:04.020617] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190fb048 00:26:07.421 [2024-05-15 11:12:04.021803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:04.021819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:07.421 [2024-05-15 11:12:04.032969] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f710c0) with pdu=0x2000190e2c28 00:26:07.421 [2024-05-15 11:12:04.034303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2376 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:07.421 [2024-05-15 11:12:04.034319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:07.421 00:26:07.421 Latency(us) 00:26:07.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.421 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:07.421 nvme0n1 : 2.01 22222.60 86.81 0.00 0.00 5752.31 2129.92 15073.28 00:26:07.421 =================================================================================================================== 00:26:07.421 Total : 22222.60 86.81 0.00 0.00 5752.31 2129.92 15073.28 00:26:07.421 0 00:26:07.421 11:12:04 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:07.421 11:12:04 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:07.421 11:12:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:07.421 11:12:04 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:07.421 | .driver_specific 00:26:07.421 | .nvme_error 00:26:07.421 | .status_code 00:26:07.421 | .command_transient_transport_error' 00:26:07.683 11:12:04 -- host/digest.sh@71 -- # (( 174 > 0 )) 00:26:07.683 11:12:04 -- host/digest.sh@73 -- # killprocess 494982 00:26:07.683 11:12:04 -- common/autotest_common.sh@946 -- # '[' -z 494982 ']' 00:26:07.683 11:12:04 -- common/autotest_common.sh@950 -- # kill -0 494982 00:26:07.683 11:12:04 -- common/autotest_common.sh@951 -- # uname 00:26:07.683 11:12:04 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:07.683 11:12:04 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 494982 00:26:07.683 11:12:04 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:07.683 11:12:04 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:07.683 11:12:04 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 494982' 00:26:07.683 killing process with pid 494982 00:26:07.683 11:12:04 -- common/autotest_common.sh@965 -- # kill 494982 00:26:07.683 Received shutdown signal, test time was about 2.000000 seconds 00:26:07.683 00:26:07.683 Latency(us) 00:26:07.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.683 =================================================================================================================== 00:26:07.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.683 11:12:04 -- common/autotest_common.sh@970 -- # wait 494982 00:26:07.944 11:12:04 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:07.944 11:12:04 -- host/digest.sh@54 -- # local rw bs qd 00:26:07.944 11:12:04 -- host/digest.sh@56 -- # rw=randwrite 00:26:07.944 11:12:04 -- host/digest.sh@56 -- # bs=131072 00:26:07.944 11:12:04 -- host/digest.sh@56 -- # qd=16 00:26:07.944 11:12:04 -- host/digest.sh@58 -- # bperfpid=495880 00:26:07.944 11:12:04 -- host/digest.sh@60 -- # waitforlisten 495880 /var/tmp/bperf.sock 00:26:07.944 11:12:04 -- common/autotest_common.sh@827 -- # '[' -z 495880 ']' 00:26:07.944 11:12:04 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:07.944 11:12:04 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:07.944 11:12:04 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:07.944 11:12:04 -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:07.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:07.944 11:12:04 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:07.944 11:12:04 -- common/autotest_common.sh@10 -- # set +x 00:26:07.944 [2024-05-15 11:12:04.428542] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:26:07.944 [2024-05-15 11:12:04.428601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495880 ] 00:26:07.944 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:07.944 Zero copy mechanism will not be used. 00:26:07.944 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.944 [2024-05-15 11:12:04.503058] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.944 [2024-05-15 11:12:04.556004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.886 11:12:05 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:08.886 11:12:05 -- common/autotest_common.sh@860 -- # return 0 00:26:08.886 11:12:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:08.886 11:12:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:08.886 11:12:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:08.886 11:12:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.886 11:12:05 -- common/autotest_common.sh@10 -- # set +x 00:26:08.886 11:12:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.886 11:12:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.886 11:12:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.146 nvme0n1 00:26:09.146 11:12:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:09.146 11:12:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.146 11:12:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.146 11:12:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.146 11:12:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:09.146 11:12:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.146 Zero copy mechanism will not be used. 00:26:09.146 Running I/O for 2 seconds... 
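(Note, not part of the captured output: the repeated "Data digest error ... COMMAND TRANSIENT TRANSPORT ERROR (00/22)" notices above and below are the expected outcome of this test, since host/digest.sh deliberately corrupts every 32nd crc32c calculation through accel_error_inject_error and then verifies that the transient-error counter is non-zero. A minimal sketch of that check, using only the RPCs visible in this trace, with paths abbreviated relative to the spdk checkout and assuming the same bperf.sock socket and nvme0n1 bdev name as this run:

    # inject a CRC-32C corruption into every 32nd accel operation on the target side
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # drive the workload through the bdevperf instance listening on bperf.sock
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # read back how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22)
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The counter must come back greater than zero for the digest test to pass; in the 4096-byte randwrite pass above it was 174.)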
00:26:09.146 [2024-05-15 11:12:05.717119] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.146 [2024-05-15 11:12:05.717574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.146 [2024-05-15 11:12:05.717602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.146 [2024-05-15 11:12:05.728677] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.146 [2024-05-15 11:12:05.729051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.146 [2024-05-15 11:12:05.729071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.146 [2024-05-15 11:12:05.739989] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.146 [2024-05-15 11:12:05.740199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.146 [2024-05-15 11:12:05.740216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.147 [2024-05-15 11:12:05.751675] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.147 [2024-05-15 11:12:05.751943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.147 [2024-05-15 11:12:05.751961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.147 [2024-05-15 11:12:05.762838] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.147 [2024-05-15 11:12:05.763266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.147 [2024-05-15 11:12:05.763283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.147 [2024-05-15 11:12:05.775002] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.147 [2024-05-15 11:12:05.775323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.147 [2024-05-15 11:12:05.775340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.147 [2024-05-15 11:12:05.786605] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.147 [2024-05-15 11:12:05.786829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.147 [2024-05-15 11:12:05.786850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.147 [2024-05-15 11:12:05.797962] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.147 [2024-05-15 11:12:05.798171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.147 [2024-05-15 11:12:05.798187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.407 [2024-05-15 11:12:05.809667] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.407 [2024-05-15 11:12:05.810161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.407 [2024-05-15 11:12:05.810179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.407 [2024-05-15 11:12:05.821269] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.407 [2024-05-15 11:12:05.821588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.407 [2024-05-15 11:12:05.821605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.407 [2024-05-15 11:12:05.832299] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.407 [2024-05-15 11:12:05.832637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.407 [2024-05-15 11:12:05.832655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.407 [2024-05-15 11:12:05.843470] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.407 [2024-05-15 11:12:05.843695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.407 [2024-05-15 11:12:05.843711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.407 [2024-05-15 11:12:05.853852] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.407 [2024-05-15 11:12:05.854246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.407 [2024-05-15 11:12:05.854263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.407 [2024-05-15 11:12:05.865384] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.407 [2024-05-15 11:12:05.865630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.407 [2024-05-15 11:12:05.865646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.407 [2024-05-15 11:12:05.874231] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.407 [2024-05-15 11:12:05.874531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.407 [2024-05-15 11:12:05.874553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.884229] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.884536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.884558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.892623] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.892968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.892985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.902671] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.902906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.902921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.912823] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.913125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.913142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.922010] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.922402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.922419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.931345] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.931643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.931660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.942472] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.942788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.942806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.950486] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.950911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.950928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.960829] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.961163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.961180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.968617] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.968922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.968939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.976269] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.976468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.976484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.984665] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.985114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:05.985133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:05.994597] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:05.994964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 
[2024-05-15 11:12:05.994981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:06.003163] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:06.003503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:06.003520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:06.012416] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:06.012834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:06.012851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:06.021308] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:06.021640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:06.021657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:06.029843] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:06.030203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:06.030220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:06.039530] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:06.039786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:06.039804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:06.044828] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:06.045028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:06.045044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.408 [2024-05-15 11:12:06.055032] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.408 [2024-05-15 11:12:06.055369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:09.408 [2024-05-15 11:12:06.055385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.063226] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.063594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.063611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.067974] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.068174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.068190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.072903] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.073102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.073118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.081510] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.081840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.081858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.086819] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.087018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.087034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.095068] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.095310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.095326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.106450] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.106849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.106865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.115347] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.115742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.115759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.125968] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.126335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.126352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.134010] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.134340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.134356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.142445] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.142761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.669 [2024-05-15 11:12:06.142778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.669 [2024-05-15 11:12:06.153431] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.669 [2024-05-15 11:12:06.153744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.153761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.164099] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.164518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.164536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.175341] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.175421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.175436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.187055] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.187278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.187293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.198524] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.198962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.198979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.210426] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.210864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.210881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.222143] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.222522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.222539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.233723] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.234183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.234200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.245641] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.245864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.245881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.257366] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 
[2024-05-15 11:12:06.257672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.257690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.269078] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.269461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.269478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.280604] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.280934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.280951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.292650] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.292945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.292966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.304215] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.304572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.304590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.670 [2024-05-15 11:12:06.315467] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.670 [2024-05-15 11:12:06.315722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.670 [2024-05-15 11:12:06.315739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.326484] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.326805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.326822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.337904] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) 
with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.338354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.338372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.349810] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.350205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.350223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.361336] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.361686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.361703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.370910] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.371319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.371335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.381698] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.381917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.381933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.393148] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.393360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.393376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.404011] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.404210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.404227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.414749] tcp.c:2055:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.415128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.415145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.425762] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.426158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.426175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.436605] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.437009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.437026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.447228] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.447443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.447459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.457660] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.457947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.457964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.468476] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.468704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.930 [2024-05-15 11:12:06.468720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.930 [2024-05-15 11:12:06.479261] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.930 [2024-05-15 11:12:06.479607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.479625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.931 [2024-05-15 11:12:06.488917] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.489227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.489243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.931 [2024-05-15 11:12:06.500538] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.500885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.500900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.931 [2024-05-15 11:12:06.512195] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.512496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.512512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.931 [2024-05-15 11:12:06.524201] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.524444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.524458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.931 [2024-05-15 11:12:06.534578] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.534689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.534704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.931 [2024-05-15 11:12:06.545854] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.546174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.546190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.931 [2024-05-15 11:12:06.554431] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.554538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.554557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
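(Context for the repeated entries above and below: each *ERROR* line is tcp.c's data_crc32_calc_done() reporting that the CRC-32C it computed over a received NVMe/TCP PDU payload does not match the data digest (DDGST) carried in the PDU, and the host side then surfaces the affected 32-block WRITE as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The digest itself is the standard reflected CRC-32C of the payload bytes. The following is a minimal, self-contained sketch of that calculation only; it is not SPDK's own helper (SPDK provides its own CRC-32C routines), and the sample buffer and names are illustrative assumptions.)

    /* crc32c_sketch.c - illustrative CRC-32C (Castagnoli) over a PDU payload.
     * Bitwise reflected algorithm, polynomial 0x82F63B78, init/final XOR ~0,
     * which is the digest the NVMe/TCP DDGST field carries. Not SPDK code. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint32_t crc32c(const void *buf, size_t len, uint32_t crc)
    {
        const uint8_t *p = buf;
        crc = ~crc;                      /* seed with all-ones */
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)  /* one bit of the byte per round */
                crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0u);
        }
        return ~crc;                     /* final inversion */
    }

    int main(void)
    {
        uint8_t pdu_data[512] = { 0 };   /* hypothetical stand-in for one payload */
        uint32_t ddgst = crc32c(pdu_data, sizeof(pdu_data), 0);
        printf("computed DDGST = 0x%08x\n", ddgst);
        return 0;
    }

(A receiver doing this check would compare the computed value against the DDGST trailing the PDU; any mismatch is what the log reports as a data digest error, and the command is failed with a transient transport status so the initiator may retry.)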
00:26:09.931 [2024-05-15 11:12:06.565267] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.565567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.565583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.931 [2024-05-15 11:12:06.575901] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:09.931 [2024-05-15 11:12:06.576020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.931 [2024-05-15 11:12:06.576035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.191 [2024-05-15 11:12:06.586177] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.191 [2024-05-15 11:12:06.586405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.586420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.597403] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.597650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.597665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.604763] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.604814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.604829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.610976] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.611035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.611050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.614949] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.615000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.615015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.618929] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.619108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.619123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.627457] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.627516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.627530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.635996] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.636054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.636068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.640564] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.640617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.640632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.644620] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.644678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.644693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.650630] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.650699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.650713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.658994] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.659053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.659067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.663621] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.663679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.663694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.671594] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.671646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.671661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.679257] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.679333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.679347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.685697] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.685975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.685991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.693356] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.693407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.693424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.697266] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.697317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.697332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.701331] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.701380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.701394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.705654] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.705704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.705718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.709393] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.709446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.709460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.715780] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.716022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.716038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.720329] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.720385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.720400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.724284] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.724333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.724348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.728363] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.728424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.728439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.733976] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.734043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 
[2024-05-15 11:12:06.734059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.742676] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.742996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.192 [2024-05-15 11:12:06.743012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.192 [2024-05-15 11:12:06.747710] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.192 [2024-05-15 11:12:06.747766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.747781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.751486] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.751553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.751568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.759388] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.759443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.759458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.763429] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.763483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.763498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.768600] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.768687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.768702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.777860] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.778152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.778168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.787442] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.787506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.787521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.794704] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.794759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.794773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.798965] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.799015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.799031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.804194] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.804244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.804259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.809948] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.810006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.810021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.816958] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.817034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.817049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.822193] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.822246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.822260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.828258] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.828335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.828350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.193 [2024-05-15 11:12:06.834414] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.193 [2024-05-15 11:12:06.834465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.193 [2024-05-15 11:12:06.834480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.453 [2024-05-15 11:12:06.843354] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.453 [2024-05-15 11:12:06.843621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.453 [2024-05-15 11:12:06.843640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.850492] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.850558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.850572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.858480] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.858539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.858560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.867723] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.867884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.867898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.874220] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.874271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.874286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.878514] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.878570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.878585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.882656] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.882709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.882723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.886475] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.886532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.886550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.890539] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.890622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.890637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.894480] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.894538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.894557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.901954] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.902030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.902044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.907330] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 
[2024-05-15 11:12:06.907396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.907411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.911641] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.911708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.911723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.920191] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.920251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.920266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.927443] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.927508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.927523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.933658] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.933744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.933758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.942063] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.942117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.942132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.946282] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.946338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.946352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.950359] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.950410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.950424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.954898] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.954948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.954963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.959517] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.959577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.959593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.965382] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.965656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.965673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.971631] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.971683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.971698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.975568] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.975624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.975639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.980059] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.980131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.980145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.987425] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.987505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.987520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:06.995273] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:06.995358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:06.995375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:07.005993] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.006213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.006229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:07.016510] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.016605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.016620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:07.028196] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.028514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.028530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:07.038298] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.038564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.038581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:07.049126] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.049333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.049349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:10.454 [2024-05-15 11:12:07.060832] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.061138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.061153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:07.072502] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.072774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.072790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:07.083967] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.084271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.084287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.454 [2024-05-15 11:12:07.095329] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.454 [2024-05-15 11:12:07.095527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.454 [2024-05-15 11:12:07.095541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.106903] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.107179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.107194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.118410] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.118612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.118627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.128711] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.128769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.128784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.137587] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.137866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.137882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.144031] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.144111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.144125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.152889] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.152951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.152965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.161697] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.161759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.161774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.167590] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.167649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.167668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.175197] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.175464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.175479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.180521] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.180579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.180594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.188235] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.188298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.188313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.195038] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.195092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.195107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.203168] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.203237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.203252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.208475] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.208555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.208569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.217514] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.217587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.217603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.222630] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.222682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.222697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.232017] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.232265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.232281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.242947] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.243181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.243196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.254193] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.254430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.254446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.264057] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.264140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.264155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.276042] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.276274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.276289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.287526] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.287822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.287839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.299175] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.299246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.299261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.310406] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.310505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 
[2024-05-15 11:12:07.310521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.320375] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.715 [2024-05-15 11:12:07.320614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.715 [2024-05-15 11:12:07.320629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.715 [2024-05-15 11:12:07.332418] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.716 [2024-05-15 11:12:07.332674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.716 [2024-05-15 11:12:07.332690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.716 [2024-05-15 11:12:07.343769] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.716 [2024-05-15 11:12:07.344088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.716 [2024-05-15 11:12:07.344104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.716 [2024-05-15 11:12:07.355271] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.716 [2024-05-15 11:12:07.355544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.716 [2024-05-15 11:12:07.355564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.716 [2024-05-15 11:12:07.366064] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.716 [2024-05-15 11:12:07.366360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.716 [2024-05-15 11:12:07.366376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.377570] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.377855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.377871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.388643] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.388931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.388947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.400666] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.400939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.400955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.411540] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.411714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.411729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.421983] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.422216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.422235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.434517] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.434790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.434807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.444924] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.445182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.445199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.456543] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.456928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.456943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.468176] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.468505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.468520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.475173] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.475240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.475255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.479136] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.479193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.479208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.483220] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.483280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.483295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.487247] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.487298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.487314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.491025] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.491083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.491098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.496786] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.496849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.496863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.502282] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.502334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.502349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.506029] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.506080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.506094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.509823] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.509888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.509903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.513588] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.513641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.513656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.517342] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.517402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.517416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.521638] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.521697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.521711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.525799] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.525851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.525866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.529510] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 
[2024-05-15 11:12:07.529568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.529583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.533384] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.533436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.533450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.537238] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.537289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.537304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.978 [2024-05-15 11:12:07.540985] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.978 [2024-05-15 11:12:07.541038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.978 [2024-05-15 11:12:07.541052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.544925] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.544976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.544990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.548650] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.548702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.548717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.552342] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.552393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.552408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.556146] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.556213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.556227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.559875] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.559926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.559944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.564144] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.564218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.564233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.568564] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.568634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.568648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.572836] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.572916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.572931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.577153] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.577213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.577227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.580873] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.580928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.580943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.584643] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.584696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.584712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.588325] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.588376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.588391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.592036] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.592090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.592105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.595746] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.595820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.595835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.599421] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.599473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.599488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.603125] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.603177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.603191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.606935] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.606994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.607009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:10.979 [2024-05-15 11:12:07.611523] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.611611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.611626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.616054] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.616107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.616122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.620364] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.620415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.620430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.979 [2024-05-15 11:12:07.625871] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:10.979 [2024-05-15 11:12:07.626086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.979 [2024-05-15 11:12:07.626102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.240 [2024-05-15 11:12:07.632106] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.240 [2024-05-15 11:12:07.632159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.240 [2024-05-15 11:12:07.632177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.240 [2024-05-15 11:12:07.636493] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.240 [2024-05-15 11:12:07.636557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.240 [2024-05-15 11:12:07.636573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.240 [2024-05-15 11:12:07.640671] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.240 [2024-05-15 11:12:07.640723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.640737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.644432] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.644485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.644500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.648626] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.648739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.648754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.653600] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.653670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.653684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.662296] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.662368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.662383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.669664] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.669746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.669761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.674064] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.674140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.674155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.681783] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.682075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.682091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.687851] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.687916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.687931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.697398] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.697480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.697495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.706630] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.706687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.706702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.241 [2024-05-15 11:12:07.711662] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f71470) with pdu=0x2000190fef90 00:26:11.241 [2024-05-15 11:12:07.711763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.241 [2024-05-15 11:12:07.711779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.241 00:26:11.241 Latency(us) 00:26:11.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.241 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:11.241 nvme0n1 : 2.00 3919.03 489.88 0.00 0.00 4076.69 1774.93 12397.23 00:26:11.241 =================================================================================================================== 00:26:11.241 Total : 3919.03 489.88 0.00 0.00 4076.69 1774.93 12397.23 00:26:11.241 0 00:26:11.241 11:12:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:11.241 11:12:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:11.241 11:12:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:11.241 11:12:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:11.241 | .driver_specific 00:26:11.241 | .nvme_error 00:26:11.241 | .status_code 00:26:11.241 | .command_transient_transport_error' 00:26:11.503 11:12:07 -- host/digest.sh@71 -- # (( 253 > 0 )) 00:26:11.503 11:12:07 -- host/digest.sh@73 -- # killprocess 495880 00:26:11.503 11:12:07 -- common/autotest_common.sh@946 -- # '[' -z 495880 ']' 00:26:11.503 11:12:07 -- common/autotest_common.sh@950 -- # kill -0 495880 00:26:11.503 11:12:07 -- common/autotest_common.sh@951 -- # uname 00:26:11.503 11:12:07 -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:11.503 11:12:07 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 495880 00:26:11.503 11:12:07 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:11.503 11:12:07 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:11.503 11:12:07 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 495880' 00:26:11.503 killing process with pid 495880 00:26:11.503 11:12:07 -- common/autotest_common.sh@965 -- # kill 495880 00:26:11.503 Received shutdown signal, test time was about 2.000000 seconds 00:26:11.503 00:26:11.503 Latency(us) 00:26:11.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.503 =================================================================================================================== 00:26:11.503 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.503 11:12:07 -- common/autotest_common.sh@970 -- # wait 495880 00:26:11.503 11:12:08 -- host/digest.sh@116 -- # killprocess 493366 00:26:11.503 11:12:08 -- common/autotest_common.sh@946 -- # '[' -z 493366 ']' 00:26:11.503 11:12:08 -- common/autotest_common.sh@950 -- # kill -0 493366 00:26:11.503 11:12:08 -- common/autotest_common.sh@951 -- # uname 00:26:11.503 11:12:08 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:11.503 11:12:08 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 493366 00:26:11.503 11:12:08 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:11.503 11:12:08 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:11.503 11:12:08 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 493366' 00:26:11.503 killing process with pid 493366 00:26:11.503 11:12:08 -- common/autotest_common.sh@965 -- # kill 493366 00:26:11.503 [2024-05-15 11:12:08.126120] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:11.503 11:12:08 -- common/autotest_common.sh@970 -- # wait 493366 00:26:11.765 00:26:11.765 real 0m16.067s 00:26:11.765 user 0m31.667s 00:26:11.765 sys 0m3.348s 00:26:11.765 11:12:08 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:11.765 11:12:08 -- common/autotest_common.sh@10 -- # set +x 00:26:11.765 ************************************ 00:26:11.765 END TEST nvmf_digest_error 00:26:11.765 ************************************ 00:26:11.765 11:12:08 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:11.765 11:12:08 -- host/digest.sh@150 -- # nvmftestfini 00:26:11.765 11:12:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:11.765 11:12:08 -- nvmf/common.sh@117 -- # sync 00:26:11.765 11:12:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:11.765 11:12:08 -- nvmf/common.sh@120 -- # set +e 00:26:11.765 11:12:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:11.765 11:12:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:11.765 rmmod nvme_tcp 00:26:11.765 rmmod nvme_fabrics 00:26:11.765 rmmod nvme_keyring 00:26:11.765 11:12:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.765 11:12:08 -- nvmf/common.sh@124 -- # set -e 00:26:11.765 11:12:08 -- nvmf/common.sh@125 -- # return 0 00:26:11.765 11:12:08 -- nvmf/common.sh@478 -- # '[' -n 493366 ']' 00:26:11.765 11:12:08 -- nvmf/common.sh@479 -- # killprocess 493366 00:26:11.765 11:12:08 -- common/autotest_common.sh@946 -- # '[' -z 493366 ']' 00:26:11.765 11:12:08 -- 
common/autotest_common.sh@950 -- # kill -0 493366 00:26:11.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (493366) - No such process 00:26:11.765 11:12:08 -- common/autotest_common.sh@973 -- # echo 'Process with pid 493366 is not found' 00:26:11.765 Process with pid 493366 is not found 00:26:11.765 11:12:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:11.765 11:12:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:11.765 11:12:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:11.765 11:12:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.765 11:12:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.765 11:12:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.765 11:12:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.765 11:12:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.315 11:12:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:14.315 00:26:14.315 real 0m41.754s 00:26:14.315 user 1m5.413s 00:26:14.315 sys 0m12.126s 00:26:14.315 11:12:10 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:14.315 11:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:14.315 ************************************ 00:26:14.315 END TEST nvmf_digest 00:26:14.315 ************************************ 00:26:14.315 11:12:10 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:26:14.315 11:12:10 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:26:14.315 11:12:10 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:26:14.315 11:12:10 -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:14.315 11:12:10 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:14.315 11:12:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:14.315 11:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:14.315 ************************************ 00:26:14.315 START TEST nvmf_bdevperf 00:26:14.315 ************************************ 00:26:14.315 11:12:10 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:14.315 * Looking for test storage... 
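The nvmf_bdevperf suite that starts here is driven by test/nvmf/host/bdevperf.sh out of the same SPDK tree. As a rough sketch (assuming an already-built tree and root privileges; the path and --transport flag are the ones run_test passes above), the same host test can be launched standalone with:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # same invocation run_test uses above: host-side bdevperf over NVMe/TCP
    sudo ./test/nvmf/host/bdevperf.sh --transport=tcp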
00:26:14.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:14.315 11:12:10 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.315 11:12:10 -- nvmf/common.sh@7 -- # uname -s 00:26:14.315 11:12:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.315 11:12:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.315 11:12:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.315 11:12:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.315 11:12:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.315 11:12:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.315 11:12:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.315 11:12:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.315 11:12:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.315 11:12:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.315 11:12:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:14.315 11:12:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:14.315 11:12:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.315 11:12:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.315 11:12:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.315 11:12:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.315 11:12:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.315 11:12:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.315 11:12:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.315 11:12:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.315 11:12:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.315 11:12:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.315 11:12:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.315 11:12:10 -- paths/export.sh@5 -- # export PATH 00:26:14.315 11:12:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.315 11:12:10 -- nvmf/common.sh@47 -- # : 0 00:26:14.315 11:12:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.315 11:12:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.315 11:12:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.315 11:12:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.315 11:12:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.315 11:12:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.315 11:12:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.315 11:12:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.315 11:12:10 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:14.315 11:12:10 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.315 11:12:10 -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:14.316 11:12:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:14.316 11:12:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.316 11:12:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:14.316 11:12:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:14.316 11:12:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:14.316 11:12:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.316 11:12:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.316 11:12:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.316 11:12:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:14.316 11:12:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:14.316 11:12:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.316 11:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:20.905 11:12:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:20.905 11:12:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:20.905 11:12:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:20.905 11:12:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:20.905 11:12:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:20.905 11:12:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:20.905 11:12:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:20.905 11:12:17 -- nvmf/common.sh@295 -- # net_devs=() 00:26:20.905 11:12:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:20.905 11:12:17 -- nvmf/common.sh@296 
-- # e810=() 00:26:20.905 11:12:17 -- nvmf/common.sh@296 -- # local -ga e810 00:26:20.905 11:12:17 -- nvmf/common.sh@297 -- # x722=() 00:26:20.905 11:12:17 -- nvmf/common.sh@297 -- # local -ga x722 00:26:20.905 11:12:17 -- nvmf/common.sh@298 -- # mlx=() 00:26:20.905 11:12:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:20.905 11:12:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.905 11:12:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:20.905 11:12:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:20.905 11:12:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:20.905 11:12:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.905 11:12:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:20.905 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:20.905 11:12:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.905 11:12:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:20.905 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:20.905 11:12:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:20.905 11:12:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.905 11:12:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.905 11:12:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:20.905 11:12:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.905 11:12:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:20.905 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:26:20.905 11:12:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.905 11:12:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.905 11:12:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.905 11:12:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:20.905 11:12:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.905 11:12:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:20.905 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:20.905 11:12:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.905 11:12:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:20.905 11:12:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:20.905 11:12:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:20.905 11:12:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:20.905 11:12:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.905 11:12:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.905 11:12:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.905 11:12:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:20.905 11:12:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.905 11:12:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.905 11:12:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:20.905 11:12:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.905 11:12:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.905 11:12:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:20.905 11:12:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:20.905 11:12:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.905 11:12:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.166 11:12:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.166 11:12:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.166 11:12:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:21.166 11:12:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.166 11:12:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.166 11:12:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.166 11:12:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:21.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:26:21.166 00:26:21.166 --- 10.0.0.2 ping statistics --- 00:26:21.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.166 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:26:21.166 11:12:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:26:21.166 00:26:21.166 --- 10.0.0.1 ping statistics --- 00:26:21.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.166 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:26:21.166 11:12:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.166 11:12:17 -- nvmf/common.sh@411 -- # return 0 00:26:21.166 11:12:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:21.166 11:12:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.166 11:12:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:21.166 11:12:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:21.166 11:12:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.166 11:12:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:21.166 11:12:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:21.166 11:12:17 -- host/bdevperf.sh@25 -- # tgt_init 00:26:21.166 11:12:17 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:21.166 11:12:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:21.166 11:12:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:21.166 11:12:17 -- common/autotest_common.sh@10 -- # set +x 00:26:21.166 11:12:17 -- nvmf/common.sh@470 -- # nvmfpid=501140 00:26:21.166 11:12:17 -- nvmf/common.sh@471 -- # waitforlisten 501140 00:26:21.166 11:12:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:21.166 11:12:17 -- common/autotest_common.sh@827 -- # '[' -z 501140 ']' 00:26:21.166 11:12:17 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.166 11:12:17 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:21.166 11:12:17 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.166 11:12:17 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:21.166 11:12:17 -- common/autotest_common.sh@10 -- # set +x 00:26:21.425 [2024-05-15 11:12:17.839658] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:26:21.425 [2024-05-15 11:12:17.839722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.425 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.425 [2024-05-15 11:12:17.925392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:21.425 [2024-05-15 11:12:18.019793] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.425 [2024-05-15 11:12:18.019845] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.425 [2024-05-15 11:12:18.019854] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.425 [2024-05-15 11:12:18.019861] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.425 [2024-05-15 11:12:18.019867] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
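nvmf_tcp_init above puts the two detected e810 ports on either side of a network namespace: cvl_0_0 becomes the target interface (10.0.0.2) inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the default namespace as the initiator interface (10.0.0.1). Condensed from the trace, the same topology can be rebuilt by hand (interface and namespace names are the ones detected in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP (port 4420) on the initiator-side interface
    ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability check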
00:26:21.425 [2024-05-15 11:12:18.020003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.425 [2024-05-15 11:12:18.020170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.425 [2024-05-15 11:12:18.020171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.993 11:12:18 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:21.993 11:12:18 -- common/autotest_common.sh@860 -- # return 0 00:26:21.993 11:12:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:21.993 11:12:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:21.993 11:12:18 -- common/autotest_common.sh@10 -- # set +x 00:26:21.993 11:12:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.254 11:12:18 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:22.254 11:12:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.254 11:12:18 -- common/autotest_common.sh@10 -- # set +x 00:26:22.254 [2024-05-15 11:12:18.653398] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.254 11:12:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.254 11:12:18 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:22.254 11:12:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.254 11:12:18 -- common/autotest_common.sh@10 -- # set +x 00:26:22.254 Malloc0 00:26:22.254 11:12:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.254 11:12:18 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.254 11:12:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.254 11:12:18 -- common/autotest_common.sh@10 -- # set +x 00:26:22.254 11:12:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.254 11:12:18 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:22.254 11:12:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.254 11:12:18 -- common/autotest_common.sh@10 -- # set +x 00:26:22.254 11:12:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.254 11:12:18 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.254 11:12:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.254 11:12:18 -- common/autotest_common.sh@10 -- # set +x 00:26:22.254 [2024-05-15 11:12:18.723789] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:22.254 [2024-05-15 11:12:18.724003] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.254 11:12:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.254 11:12:18 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:22.254 11:12:18 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:22.254 11:12:18 -- nvmf/common.sh@521 -- # config=() 00:26:22.254 11:12:18 -- nvmf/common.sh@521 -- # local subsystem config 00:26:22.254 11:12:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:22.254 11:12:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:22.254 { 00:26:22.254 "params": { 00:26:22.254 "name": "Nvme$subsystem", 
00:26:22.254 "trtype": "$TEST_TRANSPORT", 00:26:22.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.254 "adrfam": "ipv4", 00:26:22.254 "trsvcid": "$NVMF_PORT", 00:26:22.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.254 "hdgst": ${hdgst:-false}, 00:26:22.254 "ddgst": ${ddgst:-false} 00:26:22.254 }, 00:26:22.254 "method": "bdev_nvme_attach_controller" 00:26:22.254 } 00:26:22.254 EOF 00:26:22.254 )") 00:26:22.254 11:12:18 -- nvmf/common.sh@543 -- # cat 00:26:22.254 11:12:18 -- nvmf/common.sh@545 -- # jq . 00:26:22.254 11:12:18 -- nvmf/common.sh@546 -- # IFS=, 00:26:22.254 11:12:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:22.254 "params": { 00:26:22.254 "name": "Nvme1", 00:26:22.254 "trtype": "tcp", 00:26:22.254 "traddr": "10.0.0.2", 00:26:22.254 "adrfam": "ipv4", 00:26:22.254 "trsvcid": "4420", 00:26:22.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:22.254 "hdgst": false, 00:26:22.254 "ddgst": false 00:26:22.254 }, 00:26:22.254 "method": "bdev_nvme_attach_controller" 00:26:22.254 }' 00:26:22.254 [2024-05-15 11:12:18.775781] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:26:22.254 [2024-05-15 11:12:18.775827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501491 ] 00:26:22.254 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.254 [2024-05-15 11:12:18.833587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.254 [2024-05-15 11:12:18.897652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.512 Running I/O for 1 seconds... 
00:26:23.891 00:26:23.891 Latency(us) 00:26:23.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.891 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:23.891 Verification LBA range: start 0x0 length 0x4000 00:26:23.891 Nvme1n1 : 1.01 9248.94 36.13 0.00 0.00 13770.81 2621.44 15619.41 00:26:23.891 =================================================================================================================== 00:26:23.891 Total : 9248.94 36.13 0.00 0.00 13770.81 2621.44 15619.41 00:26:23.891 11:12:20 -- host/bdevperf.sh@30 -- # bdevperfpid=501757 00:26:23.891 11:12:20 -- host/bdevperf.sh@32 -- # sleep 3 00:26:23.891 11:12:20 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:23.891 11:12:20 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:23.891 11:12:20 -- nvmf/common.sh@521 -- # config=() 00:26:23.891 11:12:20 -- nvmf/common.sh@521 -- # local subsystem config 00:26:23.891 11:12:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:23.891 11:12:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:23.891 { 00:26:23.891 "params": { 00:26:23.891 "name": "Nvme$subsystem", 00:26:23.891 "trtype": "$TEST_TRANSPORT", 00:26:23.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.891 "adrfam": "ipv4", 00:26:23.891 "trsvcid": "$NVMF_PORT", 00:26:23.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.891 "hdgst": ${hdgst:-false}, 00:26:23.891 "ddgst": ${ddgst:-false} 00:26:23.891 }, 00:26:23.891 "method": "bdev_nvme_attach_controller" 00:26:23.891 } 00:26:23.891 EOF 00:26:23.891 )") 00:26:23.891 11:12:20 -- nvmf/common.sh@543 -- # cat 00:26:23.891 11:12:20 -- nvmf/common.sh@545 -- # jq . 00:26:23.891 11:12:20 -- nvmf/common.sh@546 -- # IFS=, 00:26:23.891 11:12:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:23.891 "params": { 00:26:23.891 "name": "Nvme1", 00:26:23.891 "trtype": "tcp", 00:26:23.891 "traddr": "10.0.0.2", 00:26:23.891 "adrfam": "ipv4", 00:26:23.891 "trsvcid": "4420", 00:26:23.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:23.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:23.891 "hdgst": false, 00:26:23.891 "ddgst": false 00:26:23.891 }, 00:26:23.891 "method": "bdev_nvme_attach_controller" 00:26:23.891 }' 00:26:23.891 [2024-05-15 11:12:20.356138] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:26:23.891 [2024-05-15 11:12:20.356191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501757 ] 00:26:23.891 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.891 [2024-05-15 11:12:20.414561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.891 [2024-05-15 11:12:20.477750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.151 Running I/O for 15 seconds... 
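The second bdevperf pass (pid 501757) runs the same verify workload for 15 seconds, and while its I/O is in flight the test kills the nvmf target started earlier (pid 501140); that is why the completions that follow are reported as ABORTED - SQ DELETION once the TCP qpairs disappear under the initiator. Condensed from the trace, the fault-injection step is simply:

    # pids are the ones recorded in this run (nvmfpid=501140, bdevperfpid=501757)
    kill -9 501140        # kill the NVMe/TCP target while verify I/O is outstanding
    sleep 3               # give the initiator time to observe the failure before the test continues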
00:26:26.711 11:12:23 -- host/bdevperf.sh@33 -- # kill -9 501140 00:26:26.711 11:12:23 -- host/bdevperf.sh@35 -- # sleep 3 00:26:26.711 [2024-05-15 11:12:23.325653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.711 [2024-05-15 11:12:23.325697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.711 [2024-05-15 11:12:23.325718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.711 [2024-05-15 11:12:23.325728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.711 [2024-05-15 11:12:23.325740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.711 [2024-05-15 11:12:23.325748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.711 [2024-05-15 11:12:23.325757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.711 [2024-05-15 11:12:23.325766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.711 [2024-05-15 11:12:23.325776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.711 [2024-05-15 11:12:23.325785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.711 [2024-05-15 11:12:23.325795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.711 [2024-05-15 11:12:23.325805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.325978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.325987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 
11:12:23.326265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97344 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.326988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.326996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 
11:12:23.327030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.712 [2024-05-15 11:12:23.327197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.712 [2024-05-15 11:12:23.327206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-05-15 11:12:23.327699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 
[2024-05-15 11:12:23.327709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.713 [2024-05-15 11:12:23.327963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.327972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189560 is same with the state(5) to be set 00:26:26.713 [2024-05-15 11:12:23.327981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.713 [2024-05-15 11:12:23.327987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.713 [2024-05-15 11:12:23.327994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:26:26.713 [2024-05-15 11:12:23.328001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.713 [2024-05-15 11:12:23.328039] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1189560 was disconnected and freed. reset controller. 
00:26:26.713 [2024-05-15 11:12:23.331610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.713 [2024-05-15 11:12:23.331656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.713 [2024-05-15 11:12:23.332449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.713 [2024-05-15 11:12:23.333315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.713 [2024-05-15 11:12:23.333341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.713 [2024-05-15 11:12:23.333351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.713 [2024-05-15 11:12:23.333606] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.713 [2024-05-15 11:12:23.333850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.713 [2024-05-15 11:12:23.333863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.713 [2024-05-15 11:12:23.333873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.713 [2024-05-15 11:12:23.337788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.713 [2024-05-15 11:12:23.346100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.713 [2024-05-15 11:12:23.346596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.713 [2024-05-15 11:12:23.346856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.713 [2024-05-15 11:12:23.346867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.713 [2024-05-15 11:12:23.346876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.713 [2024-05-15 11:12:23.347118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.713 [2024-05-15 11:12:23.347359] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.713 [2024-05-15 11:12:23.347368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.713 [2024-05-15 11:12:23.347375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.713 [2024-05-15 11:12:23.351289] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.975 [2024-05-15 11:12:23.360322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.975 [2024-05-15 11:12:23.360903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.361194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.361205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.975 [2024-05-15 11:12:23.361214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.975 [2024-05-15 11:12:23.361455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.975 [2024-05-15 11:12:23.361704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.975 [2024-05-15 11:12:23.361713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.975 [2024-05-15 11:12:23.361720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.975 [2024-05-15 11:12:23.365637] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.975 [2024-05-15 11:12:23.374641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.975 [2024-05-15 11:12:23.375071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.375411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.375422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.975 [2024-05-15 11:12:23.375429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.975 [2024-05-15 11:12:23.375683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.975 [2024-05-15 11:12:23.375927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.975 [2024-05-15 11:12:23.375935] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.975 [2024-05-15 11:12:23.375946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.975 [2024-05-15 11:12:23.379858] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.975 [2024-05-15 11:12:23.388866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.975 [2024-05-15 11:12:23.389498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.389887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.389902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.975 [2024-05-15 11:12:23.389912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.975 [2024-05-15 11:12:23.390173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.975 [2024-05-15 11:12:23.390418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.975 [2024-05-15 11:12:23.390426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.975 [2024-05-15 11:12:23.390434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.975 [2024-05-15 11:12:23.394354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.975 [2024-05-15 11:12:23.403137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.975 [2024-05-15 11:12:23.403696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.403990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.404000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.975 [2024-05-15 11:12:23.404008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.975 [2024-05-15 11:12:23.404250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.975 [2024-05-15 11:12:23.404490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.975 [2024-05-15 11:12:23.404499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.975 [2024-05-15 11:12:23.404506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.975 [2024-05-15 11:12:23.408422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.975 [2024-05-15 11:12:23.417434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.975 [2024-05-15 11:12:23.418071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.418408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.418421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.975 [2024-05-15 11:12:23.418431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.975 [2024-05-15 11:12:23.418698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.975 [2024-05-15 11:12:23.418944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.975 [2024-05-15 11:12:23.418953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.975 [2024-05-15 11:12:23.418961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.975 [2024-05-15 11:12:23.422886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.975 [2024-05-15 11:12:23.431704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.975 [2024-05-15 11:12:23.432295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.432608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.975 [2024-05-15 11:12:23.432620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.432628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.432868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.433109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.433119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.433126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.437041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.976 [2024-05-15 11:12:23.446046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.446680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.447010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.447025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.447035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.447296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.447541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.447560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.447568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.451477] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.976 [2024-05-15 11:12:23.460243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.460712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.461029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.461041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.461048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.461289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.461530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.461539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.461557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.465467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.976 [2024-05-15 11:12:23.474463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.475021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.475318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.475329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.475336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.475583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.475825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.475834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.475841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.479754] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.976 [2024-05-15 11:12:23.488760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.489399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.489751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.489767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.489776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.490037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.490281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.490289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.490296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.494219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.976 [2024-05-15 11:12:23.502998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.503582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.504366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.504387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.504395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.504651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.504894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.504903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.504910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.508832] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.976 [2024-05-15 11:12:23.517374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.517947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.518274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.518286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.518293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.518535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.518782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.518791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.518798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.522735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.976 [2024-05-15 11:12:23.531741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.532288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.532618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.532629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.532637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.532878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.533119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.533127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.533135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.537047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.976 [2024-05-15 11:12:23.546052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.546629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.546945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.546955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.546963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.547204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.547445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.547454] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.547461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.551379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.976 [2024-05-15 11:12:23.560383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.560945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.561263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.561274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.561285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.561526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.561774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.561783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.561790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.565703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.976 [2024-05-15 11:12:23.574705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.575383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.575759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.976 [2024-05-15 11:12:23.575775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.976 [2024-05-15 11:12:23.575784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.976 [2024-05-15 11:12:23.576045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.976 [2024-05-15 11:12:23.576289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.976 [2024-05-15 11:12:23.576298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.976 [2024-05-15 11:12:23.576306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.976 [2024-05-15 11:12:23.580225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.976 [2024-05-15 11:12:23.588992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.976 [2024-05-15 11:12:23.589610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:12:23.589973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:12:23.589986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.977 [2024-05-15 11:12:23.589996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.977 [2024-05-15 11:12:23.590256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.977 [2024-05-15 11:12:23.590500] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.977 [2024-05-15 11:12:23.590509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.977 [2024-05-15 11:12:23.590517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.977 [2024-05-15 11:12:23.594435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.977 [2024-05-15 11:12:23.603201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.977 [2024-05-15 11:12:23.603836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:12:23.604182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:12:23.604195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.977 [2024-05-15 11:12:23.604204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.977 [2024-05-15 11:12:23.604469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.977 [2024-05-15 11:12:23.604719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.977 [2024-05-15 11:12:23.604729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.977 [2024-05-15 11:12:23.604737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.977 [2024-05-15 11:12:23.608652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.977 [2024-05-15 11:12:23.617416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.977 [2024-05-15 11:12:23.618067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:12:23.618286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.977 [2024-05-15 11:12:23.618299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:26.977 [2024-05-15 11:12:23.618308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:26.977 [2024-05-15 11:12:23.618577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:26.977 [2024-05-15 11:12:23.618822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.977 [2024-05-15 11:12:23.618831] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.977 [2024-05-15 11:12:23.618839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.977 [2024-05-15 11:12:23.622746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.239 [2024-05-15 11:12:23.631748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.239 [2024-05-15 11:12:23.632298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.632610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.632624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.239 [2024-05-15 11:12:23.632632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.239 [2024-05-15 11:12:23.632874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.239 [2024-05-15 11:12:23.633115] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.239 [2024-05-15 11:12:23.633124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.239 [2024-05-15 11:12:23.633131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.239 [2024-05-15 11:12:23.637041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.239 [2024-05-15 11:12:23.646030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.239 [2024-05-15 11:12:23.646530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.646843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.646854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.239 [2024-05-15 11:12:23.646861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.239 [2024-05-15 11:12:23.647102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.239 [2024-05-15 11:12:23.647347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.239 [2024-05-15 11:12:23.647355] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.239 [2024-05-15 11:12:23.647362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.239 [2024-05-15 11:12:23.651269] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.239 [2024-05-15 11:12:23.660259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.239 [2024-05-15 11:12:23.660776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.661095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.661105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.239 [2024-05-15 11:12:23.661113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.239 [2024-05-15 11:12:23.661354] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.239 [2024-05-15 11:12:23.661606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.239 [2024-05-15 11:12:23.661615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.239 [2024-05-15 11:12:23.661622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.239 [2024-05-15 11:12:23.665525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.239 [2024-05-15 11:12:23.674519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.239 [2024-05-15 11:12:23.675165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.675503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.675518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.239 [2024-05-15 11:12:23.675528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.239 [2024-05-15 11:12:23.675795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.239 [2024-05-15 11:12:23.676041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.239 [2024-05-15 11:12:23.676050] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.239 [2024-05-15 11:12:23.676058] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.239 [2024-05-15 11:12:23.679971] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.239 [2024-05-15 11:12:23.688741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.239 [2024-05-15 11:12:23.689254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.689463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.689477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.239 [2024-05-15 11:12:23.689487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.239 [2024-05-15 11:12:23.689754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.239 [2024-05-15 11:12:23.689999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.239 [2024-05-15 11:12:23.690012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.239 [2024-05-15 11:12:23.690020] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.239 [2024-05-15 11:12:23.693932] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.239 [2024-05-15 11:12:23.702929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.239 [2024-05-15 11:12:23.703482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.703826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.239 [2024-05-15 11:12:23.703838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.239 [2024-05-15 11:12:23.703846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.239 [2024-05-15 11:12:23.704087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.239 [2024-05-15 11:12:23.704328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.239 [2024-05-15 11:12:23.704337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.704344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.708255] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.240 [2024-05-15 11:12:23.717245] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.717868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.718193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.718206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.718215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.718476] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.718726] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.718737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.718744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.722657] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.240 [2024-05-15 11:12:23.731423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.731998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.732310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.732321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.732329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.732575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.732817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.732825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.732837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.736758] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.240 [2024-05-15 11:12:23.745752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.746295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.746608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.746620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.746628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.746868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.747109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.747117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.747124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.751030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.240 [2024-05-15 11:12:23.760018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.760682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.760977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.760991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.761000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.761260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.761505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.761514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.761521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.765446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.240 [2024-05-15 11:12:23.774216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.774909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.775128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.775142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.775151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.775411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.775664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.775673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.775681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.779602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.240 [2024-05-15 11:12:23.788597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.789106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.789373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.789384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.789392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.789639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.789880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.789888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.789895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.793799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.240 [2024-05-15 11:12:23.802792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.803389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.803691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.803708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.803717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.803978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.804223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.804232] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.804240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.808153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.240 [2024-05-15 11:12:23.817149] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.817703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.818007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.818018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.818026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.818267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.818508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.818516] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.818523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.822437] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.240 [2024-05-15 11:12:23.831438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.831893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.832191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.832202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.832209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.240 [2024-05-15 11:12:23.832450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.240 [2024-05-15 11:12:23.832700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.240 [2024-05-15 11:12:23.832710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.240 [2024-05-15 11:12:23.832717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.240 [2024-05-15 11:12:23.836626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.240 [2024-05-15 11:12:23.845647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.240 [2024-05-15 11:12:23.846300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.846643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.240 [2024-05-15 11:12:23.846658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.240 [2024-05-15 11:12:23.846668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.241 [2024-05-15 11:12:23.846928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.241 [2024-05-15 11:12:23.847173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.241 [2024-05-15 11:12:23.847182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.241 [2024-05-15 11:12:23.847190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.241 [2024-05-15 11:12:23.851105] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.241 [2024-05-15 11:12:23.859871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.241 [2024-05-15 11:12:23.860521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.241 [2024-05-15 11:12:23.860806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.241 [2024-05-15 11:12:23.860821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.241 [2024-05-15 11:12:23.860831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.241 [2024-05-15 11:12:23.861092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.241 [2024-05-15 11:12:23.861338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.241 [2024-05-15 11:12:23.861348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.241 [2024-05-15 11:12:23.861355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.241 [2024-05-15 11:12:23.865281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.241 [2024-05-15 11:12:23.874046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.241 [2024-05-15 11:12:23.874598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.241 [2024-05-15 11:12:23.874960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.241 [2024-05-15 11:12:23.874971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.241 [2024-05-15 11:12:23.874979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.241 [2024-05-15 11:12:23.875219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.241 [2024-05-15 11:12:23.875460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.241 [2024-05-15 11:12:23.875469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.241 [2024-05-15 11:12:23.875475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.241 [2024-05-15 11:12:23.879385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.241 [2024-05-15 11:12:23.888377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.241 [2024-05-15 11:12:23.888987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.241 [2024-05-15 11:12:23.889274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.241 [2024-05-15 11:12:23.889288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.241 [2024-05-15 11:12:23.889298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.241 [2024-05-15 11:12:23.889565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.241 [2024-05-15 11:12:23.889811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.241 [2024-05-15 11:12:23.889821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.241 [2024-05-15 11:12:23.889828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.504 [2024-05-15 11:12:23.893743] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.504 [2024-05-15 11:12:23.902745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.504 [2024-05-15 11:12:23.903332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.903714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.903725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.504 [2024-05-15 11:12:23.903734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.504 [2024-05-15 11:12:23.903975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.504 [2024-05-15 11:12:23.904215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.504 [2024-05-15 11:12:23.904224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.504 [2024-05-15 11:12:23.904231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.504 [2024-05-15 11:12:23.908137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.504 [2024-05-15 11:12:23.917129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.504 [2024-05-15 11:12:23.917841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.918173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.918191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.504 [2024-05-15 11:12:23.918201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.504 [2024-05-15 11:12:23.918461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.504 [2024-05-15 11:12:23.918714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.504 [2024-05-15 11:12:23.918724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.504 [2024-05-15 11:12:23.918732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.504 [2024-05-15 11:12:23.922646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.504 [2024-05-15 11:12:23.931409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.504 [2024-05-15 11:12:23.932071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.932443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.932457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.504 [2024-05-15 11:12:23.932466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.504 [2024-05-15 11:12:23.932734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.504 [2024-05-15 11:12:23.932980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.504 [2024-05-15 11:12:23.932988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.504 [2024-05-15 11:12:23.932996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.504 [2024-05-15 11:12:23.936911] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.504 [2024-05-15 11:12:23.945677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.504 [2024-05-15 11:12:23.946352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.946692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.946707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.504 [2024-05-15 11:12:23.946717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.504 [2024-05-15 11:12:23.946977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.504 [2024-05-15 11:12:23.947222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.504 [2024-05-15 11:12:23.947232] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.504 [2024-05-15 11:12:23.947239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.504 [2024-05-15 11:12:23.951154] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.504 [2024-05-15 11:12:23.959921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.504 [2024-05-15 11:12:23.960587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.960975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.960988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.504 [2024-05-15 11:12:23.961002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.504 [2024-05-15 11:12:23.961262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.504 [2024-05-15 11:12:23.961507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.504 [2024-05-15 11:12:23.961516] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.504 [2024-05-15 11:12:23.961524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.504 [2024-05-15 11:12:23.965449] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.504 [2024-05-15 11:12:23.974214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.504 [2024-05-15 11:12:23.974738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.975081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.975092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.504 [2024-05-15 11:12:23.975100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.504 [2024-05-15 11:12:23.975340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.504 [2024-05-15 11:12:23.975584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.504 [2024-05-15 11:12:23.975592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.504 [2024-05-15 11:12:23.975599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.504 [2024-05-15 11:12:23.979500] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.504 [2024-05-15 11:12:23.988495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.504 [2024-05-15 11:12:23.989169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.989538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:23.989560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.504 [2024-05-15 11:12:23.989570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.504 [2024-05-15 11:12:23.989831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.504 [2024-05-15 11:12:23.990075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.504 [2024-05-15 11:12:23.990084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.504 [2024-05-15 11:12:23.990091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.504 [2024-05-15 11:12:23.994003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.504 [2024-05-15 11:12:24.002755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.504 [2024-05-15 11:12:24.003343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:24.003671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.504 [2024-05-15 11:12:24.003683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.504 [2024-05-15 11:12:24.003691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.504 [2024-05-15 11:12:24.003937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.504 [2024-05-15 11:12:24.004178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.004188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.004195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.008105] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.505 [2024-05-15 11:12:24.017097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.017834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.018162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.018176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.018186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.018446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.018700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.018710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.018718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.022628] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.505 [2024-05-15 11:12:24.031417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.031948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.032312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.032326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.032335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.032604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.032849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.032859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.032866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.036779] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.505 [2024-05-15 11:12:24.045780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.046331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.046617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.046629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.046637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.046878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.047125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.047134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.047141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.051077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.505 [2024-05-15 11:12:24.060075] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.060625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.060999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.061013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.061022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.061282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.061527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.061536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.061544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.065471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.505 [2024-05-15 11:12:24.074468] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.075152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.075480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.075493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.075503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.075772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.076017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.076027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.076034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.079944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.505 [2024-05-15 11:12:24.088708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.089362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.089734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.089751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.089760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.090020] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.090265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.090279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.090286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.094204] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.505 [2024-05-15 11:12:24.102968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.103655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.104026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.104040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.104049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.104310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.104561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.104570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.104578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.108488] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.505 [2024-05-15 11:12:24.117251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.117882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.118253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.118266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.118276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.118536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.118791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.118800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.118808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.122720] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.505 [2024-05-15 11:12:24.131481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.505 [2024-05-15 11:12:24.132120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.132448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.505 [2024-05-15 11:12:24.132461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.505 [2024-05-15 11:12:24.132471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.505 [2024-05-15 11:12:24.132740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.505 [2024-05-15 11:12:24.132984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.505 [2024-05-15 11:12:24.132993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.505 [2024-05-15 11:12:24.133008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.505 [2024-05-15 11:12:24.136921] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.505 [2024-05-15 11:12:24.145692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.506 [2024-05-15 11:12:24.146374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.506 [2024-05-15 11:12:24.146716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.506 [2024-05-15 11:12:24.146731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.506 [2024-05-15 11:12:24.146741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.506 [2024-05-15 11:12:24.147002] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.506 [2024-05-15 11:12:24.147246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.506 [2024-05-15 11:12:24.147255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.506 [2024-05-15 11:12:24.147262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.506 [2024-05-15 11:12:24.151176] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.768 [2024-05-15 11:12:24.159942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.768 [2024-05-15 11:12:24.160633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.768 [2024-05-15 11:12:24.160960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.768 [2024-05-15 11:12:24.160974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.768 [2024-05-15 11:12:24.160984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.768 [2024-05-15 11:12:24.161244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.768 [2024-05-15 11:12:24.161489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.768 [2024-05-15 11:12:24.161497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.768 [2024-05-15 11:12:24.161505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.768 [2024-05-15 11:12:24.165434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.768 [2024-05-15 11:12:24.174200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.768 [2024-05-15 11:12:24.174895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.768 [2024-05-15 11:12:24.175260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.768 [2024-05-15 11:12:24.175274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.768 [2024-05-15 11:12:24.175283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.768 [2024-05-15 11:12:24.175544] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.768 [2024-05-15 11:12:24.175798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.768 [2024-05-15 11:12:24.175808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.768 [2024-05-15 11:12:24.175815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.768 [2024-05-15 11:12:24.179733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.768 [2024-05-15 11:12:24.188496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.768 [2024-05-15 11:12:24.189164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.768 [2024-05-15 11:12:24.189531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.768 [2024-05-15 11:12:24.189553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.768 [2024-05-15 11:12:24.189563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.768 [2024-05-15 11:12:24.189823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.768 [2024-05-15 11:12:24.190068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.768 [2024-05-15 11:12:24.190077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.190084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.193996] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.769 [2024-05-15 11:12:24.202761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.203435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.203749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.203764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.203775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.204035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.204279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.204289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.204296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.208212] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.769 [2024-05-15 11:12:24.216976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.217557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.217891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.217902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.217910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.218151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.218392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.218401] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.218408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.222320] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.769 [2024-05-15 11:12:24.231311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.231987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.232362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.232376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.232385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.232653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.232899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.232907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.232915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.236828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.769 [2024-05-15 11:12:24.245588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.246260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.246633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.246647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.246657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.246917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.247162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.247170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.247178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.251099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.769 [2024-05-15 11:12:24.259900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.260541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.260914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.260928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.260937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.261197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.261441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.261450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.261457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.265383] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.769 [2024-05-15 11:12:24.274147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.274814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.275029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.275044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.275054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.275314] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.275568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.275578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.275585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.279495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.769 [2024-05-15 11:12:24.288490] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.289176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.289499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.289512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.289522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.289791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.290036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.290045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.290052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.293964] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.769 [2024-05-15 11:12:24.302728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.303396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.303750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.303765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.303775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.304035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.304279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.304288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.304296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.308212] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.769 [2024-05-15 11:12:24.316979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.317654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.318030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.318048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.318058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.769 [2024-05-15 11:12:24.318318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.769 [2024-05-15 11:12:24.318572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.769 [2024-05-15 11:12:24.318581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.769 [2024-05-15 11:12:24.318589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.769 [2024-05-15 11:12:24.322499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.769 [2024-05-15 11:12:24.331265] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.769 [2024-05-15 11:12:24.331877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.332208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-05-15 11:12:24.332222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.769 [2024-05-15 11:12:24.332232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.770 [2024-05-15 11:12:24.332493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.770 [2024-05-15 11:12:24.332745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.770 [2024-05-15 11:12:24.332758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.770 [2024-05-15 11:12:24.332766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.770 [2024-05-15 11:12:24.336679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.770 [2024-05-15 11:12:24.345526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.770 [2024-05-15 11:12:24.346200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.346579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.346594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.770 [2024-05-15 11:12:24.346604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.770 [2024-05-15 11:12:24.346864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.770 [2024-05-15 11:12:24.347108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.770 [2024-05-15 11:12:24.347118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.770 [2024-05-15 11:12:24.347125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.770 [2024-05-15 11:12:24.351043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.770 [2024-05-15 11:12:24.359817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.770 [2024-05-15 11:12:24.360399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.360580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.360591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.770 [2024-05-15 11:12:24.360604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.770 [2024-05-15 11:12:24.360845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.770 [2024-05-15 11:12:24.361087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.770 [2024-05-15 11:12:24.361097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.770 [2024-05-15 11:12:24.361104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.770 [2024-05-15 11:12:24.365022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.770 [2024-05-15 11:12:24.374014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.770 [2024-05-15 11:12:24.374646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.375014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.375027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.770 [2024-05-15 11:12:24.375036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.770 [2024-05-15 11:12:24.375297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.770 [2024-05-15 11:12:24.375541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.770 [2024-05-15 11:12:24.375558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.770 [2024-05-15 11:12:24.375566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.770 [2024-05-15 11:12:24.379570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.770 [2024-05-15 11:12:24.388337] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.770 [2024-05-15 11:12:24.388993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.389371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.389384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.770 [2024-05-15 11:12:24.389393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.770 [2024-05-15 11:12:24.389662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.770 [2024-05-15 11:12:24.389907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.770 [2024-05-15 11:12:24.389916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.770 [2024-05-15 11:12:24.389923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.770 [2024-05-15 11:12:24.393834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.770 [2024-05-15 11:12:24.402599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.770 [2024-05-15 11:12:24.403134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.403506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.403519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.770 [2024-05-15 11:12:24.403528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.770 [2024-05-15 11:12:24.403803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.770 [2024-05-15 11:12:24.404050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.770 [2024-05-15 11:12:24.404059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.770 [2024-05-15 11:12:24.404067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.770 [2024-05-15 11:12:24.407979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.770 [2024-05-15 11:12:24.416970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.770 [2024-05-15 11:12:24.417646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.418021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-05-15 11:12:24.418034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:27.770 [2024-05-15 11:12:24.418044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:27.770 [2024-05-15 11:12:24.418304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:27.770 [2024-05-15 11:12:24.418557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.770 [2024-05-15 11:12:24.418566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.770 [2024-05-15 11:12:24.418574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.032 [2024-05-15 11:12:24.422486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.032 [2024-05-15 11:12:24.431252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.032 [2024-05-15 11:12:24.431889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.032 [2024-05-15 11:12:24.432249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.032 [2024-05-15 11:12:24.432263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.032 [2024-05-15 11:12:24.432272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.032 [2024-05-15 11:12:24.432533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.032 [2024-05-15 11:12:24.432785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.032 [2024-05-15 11:12:24.432795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.032 [2024-05-15 11:12:24.432803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.032 [2024-05-15 11:12:24.436715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.032 [2024-05-15 11:12:24.445479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.032 [2024-05-15 11:12:24.446162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.032 [2024-05-15 11:12:24.446486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.032 [2024-05-15 11:12:24.446500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.032 [2024-05-15 11:12:24.446509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.032 [2024-05-15 11:12:24.446777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.032 [2024-05-15 11:12:24.447026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.032 [2024-05-15 11:12:24.447036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.032 [2024-05-15 11:12:24.447043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.032 [2024-05-15 11:12:24.450954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.032 [2024-05-15 11:12:24.459723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.032 [2024-05-15 11:12:24.460272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.032 [2024-05-15 11:12:24.460463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.032 [2024-05-15 11:12:24.460474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.032 [2024-05-15 11:12:24.460482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.032 [2024-05-15 11:12:24.460730] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.032 [2024-05-15 11:12:24.460973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.460981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.460989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.464931] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.033 [2024-05-15 11:12:24.473930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.033 [2024-05-15 11:12:24.474604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.474904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.474918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.033 [2024-05-15 11:12:24.474927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.033 [2024-05-15 11:12:24.475187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.033 [2024-05-15 11:12:24.475431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.475440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.475448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.479364] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.033 [2024-05-15 11:12:24.488137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.033 [2024-05-15 11:12:24.488840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.489074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.489088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.033 [2024-05-15 11:12:24.489098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.033 [2024-05-15 11:12:24.489359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.033 [2024-05-15 11:12:24.489610] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.489621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.489633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.493550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.033 [2024-05-15 11:12:24.502315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.033 [2024-05-15 11:12:24.502955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.503328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.503342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.033 [2024-05-15 11:12:24.503351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.033 [2024-05-15 11:12:24.503619] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.033 [2024-05-15 11:12:24.503865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.503874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.503882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.507795] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.033 [2024-05-15 11:12:24.516559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.033 [2024-05-15 11:12:24.517177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.517556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.517570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.033 [2024-05-15 11:12:24.517579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.033 [2024-05-15 11:12:24.517839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.033 [2024-05-15 11:12:24.518084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.518093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.518101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.522012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.033 [2024-05-15 11:12:24.530772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.033 [2024-05-15 11:12:24.531402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.531739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.531754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.033 [2024-05-15 11:12:24.531764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.033 [2024-05-15 11:12:24.532024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.033 [2024-05-15 11:12:24.532269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.532278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.532285] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.536207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.033 [2024-05-15 11:12:24.544971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.033 [2024-05-15 11:12:24.545645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.545984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.545997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.033 [2024-05-15 11:12:24.546006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.033 [2024-05-15 11:12:24.546267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.033 [2024-05-15 11:12:24.546511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.546520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.546528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.550447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.033 [2024-05-15 11:12:24.559220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.033 [2024-05-15 11:12:24.559921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.560292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.560306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.033 [2024-05-15 11:12:24.560315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.033 [2024-05-15 11:12:24.560580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.033 [2024-05-15 11:12:24.560825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.560834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.560842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.564770] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.033 [2024-05-15 11:12:24.573530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.033 [2024-05-15 11:12:24.574202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.574529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.033 [2024-05-15 11:12:24.574543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.033 [2024-05-15 11:12:24.574560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.033 [2024-05-15 11:12:24.574821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.033 [2024-05-15 11:12:24.575065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.033 [2024-05-15 11:12:24.575074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.033 [2024-05-15 11:12:24.575082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.033 [2024-05-15 11:12:24.578994] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.033 [2024-05-15 11:12:24.587761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.034 [2024-05-15 11:12:24.588415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.588784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.588800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.034 [2024-05-15 11:12:24.588810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.034 [2024-05-15 11:12:24.589070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.034 [2024-05-15 11:12:24.589315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.034 [2024-05-15 11:12:24.589325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.034 [2024-05-15 11:12:24.589333] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.034 [2024-05-15 11:12:24.593250] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.034 [2024-05-15 11:12:24.602016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.034 [2024-05-15 11:12:24.602646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.602903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.602916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.034 [2024-05-15 11:12:24.602926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.034 [2024-05-15 11:12:24.603186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.034 [2024-05-15 11:12:24.603430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.034 [2024-05-15 11:12:24.603440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.034 [2024-05-15 11:12:24.603448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.034 [2024-05-15 11:12:24.607366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.034 [2024-05-15 11:12:24.616363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.034 [2024-05-15 11:12:24.617001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.617345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.617358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.034 [2024-05-15 11:12:24.617367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.034 [2024-05-15 11:12:24.617635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.034 [2024-05-15 11:12:24.617879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.034 [2024-05-15 11:12:24.617889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.034 [2024-05-15 11:12:24.617896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.034 [2024-05-15 11:12:24.621809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.034 [2024-05-15 11:12:24.630572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.034 [2024-05-15 11:12:24.631255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.631578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.631593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.034 [2024-05-15 11:12:24.631603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.034 [2024-05-15 11:12:24.631863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.034 [2024-05-15 11:12:24.632108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.034 [2024-05-15 11:12:24.632117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.034 [2024-05-15 11:12:24.632124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.034 [2024-05-15 11:12:24.636041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.034 [2024-05-15 11:12:24.644807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.034 [2024-05-15 11:12:24.645390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.645710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.645722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.034 [2024-05-15 11:12:24.645730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.034 [2024-05-15 11:12:24.645971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.034 [2024-05-15 11:12:24.646212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.034 [2024-05-15 11:12:24.646220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.034 [2024-05-15 11:12:24.646227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.034 [2024-05-15 11:12:24.650136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.034 [2024-05-15 11:12:24.659124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.034 [2024-05-15 11:12:24.659783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.660153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.660166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.034 [2024-05-15 11:12:24.660175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.034 [2024-05-15 11:12:24.660435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.034 [2024-05-15 11:12:24.660687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.034 [2024-05-15 11:12:24.660696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.034 [2024-05-15 11:12:24.660704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.034 [2024-05-15 11:12:24.664633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.034 [2024-05-15 11:12:24.673429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.034 [2024-05-15 11:12:24.674117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.674451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.034 [2024-05-15 11:12:24.674469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.034 [2024-05-15 11:12:24.674479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.034 [2024-05-15 11:12:24.674748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.034 [2024-05-15 11:12:24.674993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.034 [2024-05-15 11:12:24.675002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.034 [2024-05-15 11:12:24.675010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.034 [2024-05-15 11:12:24.678921] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.296 [2024-05-15 11:12:24.687688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.296 [2024-05-15 11:12:24.688347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.688727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.688742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.296 [2024-05-15 11:12:24.688752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.296 [2024-05-15 11:12:24.689012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.296 [2024-05-15 11:12:24.689256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.296 [2024-05-15 11:12:24.689265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.296 [2024-05-15 11:12:24.689273] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.296 [2024-05-15 11:12:24.693190] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.296 [2024-05-15 11:12:24.701955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.296 [2024-05-15 11:12:24.702636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.702996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.703009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.296 [2024-05-15 11:12:24.703019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.296 [2024-05-15 11:12:24.703279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.296 [2024-05-15 11:12:24.703523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.296 [2024-05-15 11:12:24.703532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.296 [2024-05-15 11:12:24.703539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.296 [2024-05-15 11:12:24.707460] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.296 [2024-05-15 11:12:24.716221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.296 [2024-05-15 11:12:24.716874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.717244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.717258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.296 [2024-05-15 11:12:24.717272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.296 [2024-05-15 11:12:24.717532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.296 [2024-05-15 11:12:24.717785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.296 [2024-05-15 11:12:24.717795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.296 [2024-05-15 11:12:24.717803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.296 [2024-05-15 11:12:24.721716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.296 [2024-05-15 11:12:24.730489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.296 [2024-05-15 11:12:24.731164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.731495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.731508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.296 [2024-05-15 11:12:24.731518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.296 [2024-05-15 11:12:24.731786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.296 [2024-05-15 11:12:24.732032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.296 [2024-05-15 11:12:24.732041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.296 [2024-05-15 11:12:24.732048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.296 [2024-05-15 11:12:24.735958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.296 [2024-05-15 11:12:24.744722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.296 [2024-05-15 11:12:24.745394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.745725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.745740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.296 [2024-05-15 11:12:24.745750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.296 [2024-05-15 11:12:24.746010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.296 [2024-05-15 11:12:24.746256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.296 [2024-05-15 11:12:24.746264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.296 [2024-05-15 11:12:24.746272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.296 [2024-05-15 11:12:24.750189] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.296 [2024-05-15 11:12:24.758953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.296 [2024-05-15 11:12:24.759645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.759981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.296 [2024-05-15 11:12:24.759995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.296 [2024-05-15 11:12:24.760005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.296 [2024-05-15 11:12:24.760269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.296 [2024-05-15 11:12:24.760514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.296 [2024-05-15 11:12:24.760522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.296 [2024-05-15 11:12:24.760530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.296 [2024-05-15 11:12:24.764461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.296 [2024-05-15 11:12:24.773233] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.296 [2024-05-15 11:12:24.773938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.774263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.774277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.774286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.774555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.774800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.774809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.774816] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.778729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.297 [2024-05-15 11:12:24.787489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.788132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.788461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.788474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.788484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.788752] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.788997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.789006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.789013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.792928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.297 [2024-05-15 11:12:24.801692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.802371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.802689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.802705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.802714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.802975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.803228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.803237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.803245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.807162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.297 [2024-05-15 11:12:24.815928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.816515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.816853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.816867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.816877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.817137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.817382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.817391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.817398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.821312] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.297 [2024-05-15 11:12:24.830307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.830943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.831310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.831323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.831333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.831602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.831847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.831856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.831864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.835774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.297 [2024-05-15 11:12:24.844532] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.845204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.845532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.845553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.845563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.845824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.846068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.846081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.846089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.850000] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.297 [2024-05-15 11:12:24.858762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.859431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.859774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.859789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.859799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.860059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.860304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.860313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.860320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.864245] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.297 [2024-05-15 11:12:24.873017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.873590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.873975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.873989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.873998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.874258] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.874503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.874511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.874518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.878435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.297 [2024-05-15 11:12:24.887226] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.887873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.888198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.888211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.888221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.888481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.888734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.888744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.888756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.297 [2024-05-15 11:12:24.892672] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.297 [2024-05-15 11:12:24.901433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.297 [2024-05-15 11:12:24.902072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.902448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.297 [2024-05-15 11:12:24.902461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.297 [2024-05-15 11:12:24.902471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.297 [2024-05-15 11:12:24.902740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.297 [2024-05-15 11:12:24.902985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.297 [2024-05-15 11:12:24.902993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.297 [2024-05-15 11:12:24.903001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.298 [2024-05-15 11:12:24.906911] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.298 [2024-05-15 11:12:24.915677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.298 [2024-05-15 11:12:24.916313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.298 [2024-05-15 11:12:24.916660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.298 [2024-05-15 11:12:24.916675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.298 [2024-05-15 11:12:24.916685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.298 [2024-05-15 11:12:24.916945] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.298 [2024-05-15 11:12:24.917190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.298 [2024-05-15 11:12:24.917199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.298 [2024-05-15 11:12:24.917207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.298 [2024-05-15 11:12:24.921123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.298 [2024-05-15 11:12:24.929887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.298 [2024-05-15 11:12:24.930588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.298 [2024-05-15 11:12:24.930907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.298 [2024-05-15 11:12:24.930920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.298 [2024-05-15 11:12:24.930930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.298 [2024-05-15 11:12:24.931191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.298 [2024-05-15 11:12:24.931435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.298 [2024-05-15 11:12:24.931444] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.298 [2024-05-15 11:12:24.931451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.298 [2024-05-15 11:12:24.935370] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.298 [2024-05-15 11:12:24.944137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.298 [2024-05-15 11:12:24.944799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.298 [2024-05-15 11:12:24.945130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.298 [2024-05-15 11:12:24.945143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.298 [2024-05-15 11:12:24.945152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.298 [2024-05-15 11:12:24.945412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.298 [2024-05-15 11:12:24.945664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.298 [2024-05-15 11:12:24.945673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.298 [2024-05-15 11:12:24.945681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:24.949598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.561 [2024-05-15 11:12:24.958363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:24.958923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:24.959224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:24.959234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:24.959242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:24.959483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:24.959730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:24.959739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:24.959746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:24.963664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.561 [2024-05-15 11:12:24.972662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:24.973333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:24.973609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:24.973625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:24.973635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:24.973896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:24.974140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:24.974150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:24.974158] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:24.978082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.561 [2024-05-15 11:12:24.986865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:24.987412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:24.987823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:24.987860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:24.987871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:24.988131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:24.988376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:24.988385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:24.988393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:24.992510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.561 [2024-05-15 11:12:25.001070] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.001685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.002019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.002032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.002042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.002302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.002556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.002566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.002573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.006483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.561 [2024-05-15 11:12:25.015249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.015904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.016257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.016270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.016280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.016540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.016793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.016802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.016809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.020724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.561 [2024-05-15 11:12:25.029525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.030076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.030450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.030463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.030473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.030742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.030987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.030996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.031003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.034919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.561 [2024-05-15 11:12:25.043923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.044475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.044746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.044759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.044767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.045008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.045250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.045258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.045265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.049209] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.561 [2024-05-15 11:12:25.058218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.058768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.059075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.059086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.059093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.059334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.059580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.059589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.059596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.063517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.561 [2024-05-15 11:12:25.072525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.072976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.073304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.073319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.073327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.073574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.073816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.073824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.073831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.077745] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.561 [2024-05-15 11:12:25.086754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.087329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.087616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.087631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.087639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.087879] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.088121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.088130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.088137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.092052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.561 [2024-05-15 11:12:25.101059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.101605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.101913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.101924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.101931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.102172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.102412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.102421] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.102428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.106343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.561 [2024-05-15 11:12:25.115349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.115838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.116151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.116161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.116173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.116413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.116660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.116670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.116677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.561 [2024-05-15 11:12:25.120594] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.561 [2024-05-15 11:12:25.129599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.561 [2024-05-15 11:12:25.130269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.130491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-05-15 11:12:25.130505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.561 [2024-05-15 11:12:25.130514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.561 [2024-05-15 11:12:25.130783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.561 [2024-05-15 11:12:25.131028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.561 [2024-05-15 11:12:25.131037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.561 [2024-05-15 11:12:25.131045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.562 [2024-05-15 11:12:25.134963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.562 [2024-05-15 11:12:25.143974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.562 [2024-05-15 11:12:25.144537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.144851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.144862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.562 [2024-05-15 11:12:25.144869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.562 [2024-05-15 11:12:25.145110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.562 [2024-05-15 11:12:25.145352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.562 [2024-05-15 11:12:25.145361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.562 [2024-05-15 11:12:25.145368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.562 [2024-05-15 11:12:25.149284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.562 [2024-05-15 11:12:25.158294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.562 [2024-05-15 11:12:25.158973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.159342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.159355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.562 [2024-05-15 11:12:25.159365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.562 [2024-05-15 11:12:25.159636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.562 [2024-05-15 11:12:25.159882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.562 [2024-05-15 11:12:25.159892] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.562 [2024-05-15 11:12:25.159899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.562 [2024-05-15 11:12:25.163830] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.562 [2024-05-15 11:12:25.172612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.562 [2024-05-15 11:12:25.173195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.173523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.173534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.562 [2024-05-15 11:12:25.173542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.562 [2024-05-15 11:12:25.173789] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.562 [2024-05-15 11:12:25.174031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.562 [2024-05-15 11:12:25.174039] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.562 [2024-05-15 11:12:25.174046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.562 [2024-05-15 11:12:25.177959] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.562 [2024-05-15 11:12:25.186967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.562 [2024-05-15 11:12:25.187505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.187851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.187863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.562 [2024-05-15 11:12:25.187871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.562 [2024-05-15 11:12:25.188111] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.562 [2024-05-15 11:12:25.188352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.562 [2024-05-15 11:12:25.188360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.562 [2024-05-15 11:12:25.188367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.562 [2024-05-15 11:12:25.192280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.562 [2024-05-15 11:12:25.201286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.562 [2024-05-15 11:12:25.201872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.202177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-05-15 11:12:25.202187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.562 [2024-05-15 11:12:25.202195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.562 [2024-05-15 11:12:25.202435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.562 [2024-05-15 11:12:25.202686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.562 [2024-05-15 11:12:25.202696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.562 [2024-05-15 11:12:25.202703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.562 [2024-05-15 11:12:25.206612] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.824 [2024-05-15 11:12:25.215621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.824 [2024-05-15 11:12:25.216196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.824 [2024-05-15 11:12:25.216535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.824 [2024-05-15 11:12:25.216551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.824 [2024-05-15 11:12:25.216559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.824 [2024-05-15 11:12:25.216800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.824 [2024-05-15 11:12:25.217040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.824 [2024-05-15 11:12:25.217050] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.824 [2024-05-15 11:12:25.217057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.824 [2024-05-15 11:12:25.220970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.824 [2024-05-15 11:12:25.229972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.824 [2024-05-15 11:12:25.230647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.824 [2024-05-15 11:12:25.230976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.824 [2024-05-15 11:12:25.230990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.824 [2024-05-15 11:12:25.230999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.824 [2024-05-15 11:12:25.231259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.824 [2024-05-15 11:12:25.231504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.824 [2024-05-15 11:12:25.231513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.824 [2024-05-15 11:12:25.231521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.824 [2024-05-15 11:12:25.235440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.824 [2024-05-15 11:12:25.244206] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.824 [2024-05-15 11:12:25.244763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.824 [2024-05-15 11:12:25.245087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.824 [2024-05-15 11:12:25.245098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.824 [2024-05-15 11:12:25.245106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.824 [2024-05-15 11:12:25.245347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.824 [2024-05-15 11:12:25.245594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.824 [2024-05-15 11:12:25.245603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.824 [2024-05-15 11:12:25.245615] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.824 [2024-05-15 11:12:25.249524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.824 [2024-05-15 11:12:25.258524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.824 [2024-05-15 11:12:25.259058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.824 [2024-05-15 11:12:25.259387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.824 [2024-05-15 11:12:25.259401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.259411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.259679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.259925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.259934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.259942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.263873] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.825 [2024-05-15 11:12:25.272883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.273465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.273671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.273684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.273692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.273933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.274176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.274184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.274192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.278107] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.825 [2024-05-15 11:12:25.287112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.287668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.288003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.288014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.288021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.288262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.288503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.288511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.288518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.292436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.825 [2024-05-15 11:12:25.301472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.301927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.302253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.302263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.302271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.302512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.302759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.302768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.302775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.306719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.825 [2024-05-15 11:12:25.315729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.316362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.316701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.316716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.316725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.316985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.317229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.317238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.317246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.321168] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.825 [2024-05-15 11:12:25.329950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.330537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.330866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.330877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.330884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.331125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.331366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.331374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.331381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.335293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.825 [2024-05-15 11:12:25.344299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.344850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.345164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.345176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.345184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.345425] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.345673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.345684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.345691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.349603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.825 [2024-05-15 11:12:25.358608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.359181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.359476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.359488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.359495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.359743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.359985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.359994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.360001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.363926] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.825 [2024-05-15 11:12:25.372929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.373375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.373671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.373685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.373692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.373934] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.374177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.374185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.825 [2024-05-15 11:12:25.374193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.825 [2024-05-15 11:12:25.378112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.825 [2024-05-15 11:12:25.387122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.825 [2024-05-15 11:12:25.387597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.387882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.825 [2024-05-15 11:12:25.387894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.825 [2024-05-15 11:12:25.387902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.825 [2024-05-15 11:12:25.388143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.825 [2024-05-15 11:12:25.388384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.825 [2024-05-15 11:12:25.388392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.826 [2024-05-15 11:12:25.388399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.826 [2024-05-15 11:12:25.392318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.826 [2024-05-15 11:12:25.401326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.826 [2024-05-15 11:12:25.401862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.402179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.402189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.826 [2024-05-15 11:12:25.402196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.826 [2024-05-15 11:12:25.402437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.826 [2024-05-15 11:12:25.402684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.826 [2024-05-15 11:12:25.402694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.826 [2024-05-15 11:12:25.402701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.826 [2024-05-15 11:12:25.406613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.826 [2024-05-15 11:12:25.415612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.826 [2024-05-15 11:12:25.416152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.416445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.416456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.826 [2024-05-15 11:12:25.416463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.826 [2024-05-15 11:12:25.416710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.826 [2024-05-15 11:12:25.416951] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.826 [2024-05-15 11:12:25.416961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.826 [2024-05-15 11:12:25.416968] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.826 [2024-05-15 11:12:25.420881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.826 [2024-05-15 11:12:25.429965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.826 [2024-05-15 11:12:25.430550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.430869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.430883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.826 [2024-05-15 11:12:25.430890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.826 [2024-05-15 11:12:25.431132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.826 [2024-05-15 11:12:25.431373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.826 [2024-05-15 11:12:25.431381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.826 [2024-05-15 11:12:25.431388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.826 [2024-05-15 11:12:25.435303] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.826 [2024-05-15 11:12:25.444308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.826 [2024-05-15 11:12:25.444889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.445220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.445230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.826 [2024-05-15 11:12:25.445238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.826 [2024-05-15 11:12:25.445478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.826 [2024-05-15 11:12:25.445725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.826 [2024-05-15 11:12:25.445733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.826 [2024-05-15 11:12:25.445740] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.826 [2024-05-15 11:12:25.449657] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.826 [2024-05-15 11:12:25.458667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.826 [2024-05-15 11:12:25.459112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.459447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.459457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.826 [2024-05-15 11:12:25.459464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.826 [2024-05-15 11:12:25.459710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.826 [2024-05-15 11:12:25.459951] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.826 [2024-05-15 11:12:25.459959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.826 [2024-05-15 11:12:25.459966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.826 [2024-05-15 11:12:25.463887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.826 [2024-05-15 11:12:25.472895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.826 [2024-05-15 11:12:25.473472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.473740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.826 [2024-05-15 11:12:25.473753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:28.826 [2024-05-15 11:12:25.473768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:28.826 [2024-05-15 11:12:25.474009] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:28.826 [2024-05-15 11:12:25.474250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.826 [2024-05-15 11:12:25.474260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.826 [2024-05-15 11:12:25.474267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.088 [2024-05-15 11:12:25.478185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.088 [2024-05-15 11:12:25.487188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.088 [2024-05-15 11:12:25.487761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.488090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.488101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.088 [2024-05-15 11:12:25.488109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.088 [2024-05-15 11:12:25.488350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.088 [2024-05-15 11:12:25.488595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.088 [2024-05-15 11:12:25.488604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.088 [2024-05-15 11:12:25.488611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.088 [2024-05-15 11:12:25.492520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.088 [2024-05-15 11:12:25.501526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.088 [2024-05-15 11:12:25.502108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.502270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.502282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.088 [2024-05-15 11:12:25.502289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.088 [2024-05-15 11:12:25.502530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.088 [2024-05-15 11:12:25.502784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.088 [2024-05-15 11:12:25.502795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.088 [2024-05-15 11:12:25.502801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.088 [2024-05-15 11:12:25.506740] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.088 [2024-05-15 11:12:25.515752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.088 [2024-05-15 11:12:25.516389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.516525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.516539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.088 [2024-05-15 11:12:25.516556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.088 [2024-05-15 11:12:25.516823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.088 [2024-05-15 11:12:25.517067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.088 [2024-05-15 11:12:25.517076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.088 [2024-05-15 11:12:25.517084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.088 [2024-05-15 11:12:25.521004] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.088 [2024-05-15 11:12:25.530014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.088 [2024-05-15 11:12:25.530603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.530953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.530965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.088 [2024-05-15 11:12:25.530973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.088 [2024-05-15 11:12:25.531214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.088 [2024-05-15 11:12:25.531455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.088 [2024-05-15 11:12:25.531463] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.088 [2024-05-15 11:12:25.531470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.088 [2024-05-15 11:12:25.535385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.088 [2024-05-15 11:12:25.544394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.088 [2024-05-15 11:12:25.544937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.545238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.088 [2024-05-15 11:12:25.545249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.545256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.545497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.089 [2024-05-15 11:12:25.545743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.089 [2024-05-15 11:12:25.545751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.089 [2024-05-15 11:12:25.545758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.089 [2024-05-15 11:12:25.549673] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.089 [2024-05-15 11:12:25.558682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.089 [2024-05-15 11:12:25.559257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.559591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.559603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.559611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.559852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.089 [2024-05-15 11:12:25.560096] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.089 [2024-05-15 11:12:25.560106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.089 [2024-05-15 11:12:25.560113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.089 [2024-05-15 11:12:25.564038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.089 [2024-05-15 11:12:25.573046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.089 [2024-05-15 11:12:25.573759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.573993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.574008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.574017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.574278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.089 [2024-05-15 11:12:25.574523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.089 [2024-05-15 11:12:25.574541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.089 [2024-05-15 11:12:25.574556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.089 [2024-05-15 11:12:25.578467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.089 [2024-05-15 11:12:25.587236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.089 [2024-05-15 11:12:25.587882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.588251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.588264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.588273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.588534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.089 [2024-05-15 11:12:25.588784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.089 [2024-05-15 11:12:25.588794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.089 [2024-05-15 11:12:25.588802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.089 [2024-05-15 11:12:25.592719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.089 [2024-05-15 11:12:25.601501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.089 [2024-05-15 11:12:25.602076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.602400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.602413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.602423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.602692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.089 [2024-05-15 11:12:25.602938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.089 [2024-05-15 11:12:25.602951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.089 [2024-05-15 11:12:25.602959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.089 [2024-05-15 11:12:25.606875] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.089 [2024-05-15 11:12:25.615886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.089 [2024-05-15 11:12:25.616470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.616733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.616746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.616753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.616995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.089 [2024-05-15 11:12:25.617237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.089 [2024-05-15 11:12:25.617246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.089 [2024-05-15 11:12:25.617253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.089 [2024-05-15 11:12:25.621166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.089 [2024-05-15 11:12:25.630170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.089 [2024-05-15 11:12:25.630704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.631026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.631036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.631044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.631285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.089 [2024-05-15 11:12:25.631526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.089 [2024-05-15 11:12:25.631534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.089 [2024-05-15 11:12:25.631541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.089 [2024-05-15 11:12:25.635455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.089 [2024-05-15 11:12:25.644460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.089 [2024-05-15 11:12:25.645044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.645374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.645384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.645391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.645639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.089 [2024-05-15 11:12:25.645880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.089 [2024-05-15 11:12:25.645889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.089 [2024-05-15 11:12:25.645899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.089 [2024-05-15 11:12:25.649807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.089 [2024-05-15 11:12:25.658802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.089 [2024-05-15 11:12:25.659333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.659678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.089 [2024-05-15 11:12:25.659689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.089 [2024-05-15 11:12:25.659696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.089 [2024-05-15 11:12:25.659937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.090 [2024-05-15 11:12:25.660178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.090 [2024-05-15 11:12:25.660186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.090 [2024-05-15 11:12:25.660192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.090 [2024-05-15 11:12:25.664114] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.090 [2024-05-15 11:12:25.673112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.090 [2024-05-15 11:12:25.673762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.674090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.674103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.090 [2024-05-15 11:12:25.674113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.090 [2024-05-15 11:12:25.674373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.090 [2024-05-15 11:12:25.674626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.090 [2024-05-15 11:12:25.674635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.090 [2024-05-15 11:12:25.674643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.090 [2024-05-15 11:12:25.678559] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.090 [2024-05-15 11:12:25.687319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.090 [2024-05-15 11:12:25.687963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.688334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.688347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.090 [2024-05-15 11:12:25.688356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.090 [2024-05-15 11:12:25.688626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.090 [2024-05-15 11:12:25.688871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.090 [2024-05-15 11:12:25.688880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.090 [2024-05-15 11:12:25.688888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.090 [2024-05-15 11:12:25.692801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.090 [2024-05-15 11:12:25.701566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.090 [2024-05-15 11:12:25.702244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.702609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.702623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.090 [2024-05-15 11:12:25.702632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.090 [2024-05-15 11:12:25.702893] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.090 [2024-05-15 11:12:25.703137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.090 [2024-05-15 11:12:25.703146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.090 [2024-05-15 11:12:25.703154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.090 [2024-05-15 11:12:25.707076] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.090 [2024-05-15 11:12:25.715882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.090 [2024-05-15 11:12:25.716572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.716963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.716976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.090 [2024-05-15 11:12:25.716985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.090 [2024-05-15 11:12:25.717245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.090 [2024-05-15 11:12:25.717490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.090 [2024-05-15 11:12:25.717498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.090 [2024-05-15 11:12:25.717506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.090 [2024-05-15 11:12:25.721421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.090 [2024-05-15 11:12:25.730186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.090 [2024-05-15 11:12:25.730835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.731196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-05-15 11:12:25.731209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.090 [2024-05-15 11:12:25.731218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.090 [2024-05-15 11:12:25.731479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.090 [2024-05-15 11:12:25.731733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.090 [2024-05-15 11:12:25.731743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.090 [2024-05-15 11:12:25.731750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.090 [2024-05-15 11:12:25.735668] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.352 [2024-05-15 11:12:25.744443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.352 [2024-05-15 11:12:25.745124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.352 [2024-05-15 11:12:25.745448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.352 [2024-05-15 11:12:25.745462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.352 [2024-05-15 11:12:25.745471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.352 [2024-05-15 11:12:25.745741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.352 [2024-05-15 11:12:25.745986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.352 [2024-05-15 11:12:25.745995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.352 [2024-05-15 11:12:25.746003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.352 [2024-05-15 11:12:25.749920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.352 [2024-05-15 11:12:25.758685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.352 [2024-05-15 11:12:25.759316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.352 [2024-05-15 11:12:25.759677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.352 [2024-05-15 11:12:25.759691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.352 [2024-05-15 11:12:25.759701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.352 [2024-05-15 11:12:25.759961] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.352 [2024-05-15 11:12:25.760205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.352 [2024-05-15 11:12:25.760214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.352 [2024-05-15 11:12:25.760221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.352 [2024-05-15 11:12:25.764147] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.352 [2024-05-15 11:12:25.772911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.352 [2024-05-15 11:12:25.773554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.352 [2024-05-15 11:12:25.773986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.352 [2024-05-15 11:12:25.774000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.352 [2024-05-15 11:12:25.774009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.352 [2024-05-15 11:12:25.774268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.352 [2024-05-15 11:12:25.774513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.352 [2024-05-15 11:12:25.774522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.352 [2024-05-15 11:12:25.774529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.352 [2024-05-15 11:12:25.778445] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.352 [2024-05-15 11:12:25.787205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.352 [2024-05-15 11:12:25.787839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.352 [2024-05-15 11:12:25.788211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.352 [2024-05-15 11:12:25.788224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.352 [2024-05-15 11:12:25.788233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.352 [2024-05-15 11:12:25.788493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.352 [2024-05-15 11:12:25.788748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.788759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.788766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.792676] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.353 [2024-05-15 11:12:25.801436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.802073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.802397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.802411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.802420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.802690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.802936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.802945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.802953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.806864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.353 [2024-05-15 11:12:25.815631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.816306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.816628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.816643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.816653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.816913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.817157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.817166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.817174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.821088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.353 [2024-05-15 11:12:25.829847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.830525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.830756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.830774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.830784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.831044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.831289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.831298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.831305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.835219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.353 [2024-05-15 11:12:25.844210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.844887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.845213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.845226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.845236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.845496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.845751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.845763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.845771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.849684] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.353 [2024-05-15 11:12:25.858446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.859125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.859358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.859373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.859382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.859652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.859899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.859907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.859915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.863836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.353 [2024-05-15 11:12:25.872827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.873464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.873796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.873811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.873825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.874085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.874329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.874338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.874346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.878259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.353 [2024-05-15 11:12:25.887023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.887567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.887907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.887921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.887930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.888191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.888435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.888445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.888452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.892375] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.353 [2024-05-15 11:12:25.901380] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.902046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.902371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.902385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.902395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.902664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.902909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.902918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.902926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.906841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.353 [2024-05-15 11:12:25.915609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.916299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.916497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.916512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.353 [2024-05-15 11:12:25.916521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.353 [2024-05-15 11:12:25.916796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.353 [2024-05-15 11:12:25.917043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.353 [2024-05-15 11:12:25.917051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.353 [2024-05-15 11:12:25.917059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.353 [2024-05-15 11:12:25.920999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.353 [2024-05-15 11:12:25.929996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.353 [2024-05-15 11:12:25.930676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.931000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.353 [2024-05-15 11:12:25.931014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.354 [2024-05-15 11:12:25.931023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.354 [2024-05-15 11:12:25.931283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.354 [2024-05-15 11:12:25.931528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.354 [2024-05-15 11:12:25.931536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.354 [2024-05-15 11:12:25.931544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.354 [2024-05-15 11:12:25.935466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.354 [2024-05-15 11:12:25.944229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.354 [2024-05-15 11:12:25.944875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:25.945092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:25.945105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.354 [2024-05-15 11:12:25.945114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.354 [2024-05-15 11:12:25.945374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.354 [2024-05-15 11:12:25.945629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.354 [2024-05-15 11:12:25.945640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.354 [2024-05-15 11:12:25.945647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.354 [2024-05-15 11:12:25.949562] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.354 [2024-05-15 11:12:25.958554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.354 [2024-05-15 11:12:25.959228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:25.959464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:25.959477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.354 [2024-05-15 11:12:25.959487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.354 [2024-05-15 11:12:25.959756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.354 [2024-05-15 11:12:25.960008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.354 [2024-05-15 11:12:25.960017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.354 [2024-05-15 11:12:25.960024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.354 [2024-05-15 11:12:25.963949] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.354 [2024-05-15 11:12:25.972942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.354 [2024-05-15 11:12:25.973530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:25.973859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:25.973871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.354 [2024-05-15 11:12:25.973878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.354 [2024-05-15 11:12:25.974119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.354 [2024-05-15 11:12:25.974360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.354 [2024-05-15 11:12:25.974369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.354 [2024-05-15 11:12:25.974376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.354 [2024-05-15 11:12:25.978288] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.354 [2024-05-15 11:12:25.987276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.354 [2024-05-15 11:12:25.987912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:25.988236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:25.988249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.354 [2024-05-15 11:12:25.988258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.354 [2024-05-15 11:12:25.988518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.354 [2024-05-15 11:12:25.988772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.354 [2024-05-15 11:12:25.988782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.354 [2024-05-15 11:12:25.988789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.354 [2024-05-15 11:12:25.992894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.354 [2024-05-15 11:12:26.001684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.354 [2024-05-15 11:12:26.002234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:26.002537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.354 [2024-05-15 11:12:26.002554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.354 [2024-05-15 11:12:26.002563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.354 [2024-05-15 11:12:26.002804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.354 [2024-05-15 11:12:26.003045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.354 [2024-05-15 11:12:26.003058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.354 [2024-05-15 11:12:26.003065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.616 [2024-05-15 11:12:26.006979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.616 [2024-05-15 11:12:26.015983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.616 [2024-05-15 11:12:26.016592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.616 [2024-05-15 11:12:26.016942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.616 [2024-05-15 11:12:26.016953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.616 [2024-05-15 11:12:26.016961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.616 [2024-05-15 11:12:26.017206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.616 [2024-05-15 11:12:26.017447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.616 [2024-05-15 11:12:26.017456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.616 [2024-05-15 11:12:26.017463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.616 [2024-05-15 11:12:26.021379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.616 [2024-05-15 11:12:26.030381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.616 [2024-05-15 11:12:26.030970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.616 [2024-05-15 11:12:26.031300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.616 [2024-05-15 11:12:26.031311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.616 [2024-05-15 11:12:26.031319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.616 [2024-05-15 11:12:26.031565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.616 [2024-05-15 11:12:26.031807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.616 [2024-05-15 11:12:26.031816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.616 [2024-05-15 11:12:26.031822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.616 [2024-05-15 11:12:26.035735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.616 [2024-05-15 11:12:26.044738] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.616 [2024-05-15 11:12:26.045318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.616 [2024-05-15 11:12:26.045638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.045649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.045657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.045897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.046138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.046146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.046157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.050072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.617 [2024-05-15 11:12:26.059077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.059670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.060041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.060054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.060064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.060324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.060576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.060586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.060594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.064517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.617 [2024-05-15 11:12:26.073284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.073977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.074312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.074325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.074335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.074602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.074847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.074858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.074866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.078783] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.617 [2024-05-15 11:12:26.087549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.088229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.088466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.088479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.088490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.088760] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.089006] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.089016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.089024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.092938] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.617 [2024-05-15 11:12:26.101936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.102522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.102860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.102872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.102880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.103120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.103361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.103370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.103377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.107294] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.617 [2024-05-15 11:12:26.116297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.116829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.117153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.117164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.117171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.117413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.117660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.117669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.117676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.121593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.617 [2024-05-15 11:12:26.130621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.131261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.131632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.131647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.131656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.131917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.132161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.132170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.132178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.136087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.617 [2024-05-15 11:12:26.144854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.145536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.145898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.145912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.145921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.146181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.146426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.146434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.146442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.150358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.617 [2024-05-15 11:12:26.159126] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.159770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.160138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.160152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.160162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.160422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.160673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.160683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.160690] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.617 [2024-05-15 11:12:26.164614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.617 [2024-05-15 11:12:26.173377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.617 [2024-05-15 11:12:26.174061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.174431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.617 [2024-05-15 11:12:26.174445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.617 [2024-05-15 11:12:26.174454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.617 [2024-05-15 11:12:26.174723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.617 [2024-05-15 11:12:26.174968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.617 [2024-05-15 11:12:26.174977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.617 [2024-05-15 11:12:26.174985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.618 [2024-05-15 11:12:26.178898] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.618 [2024-05-15 11:12:26.187663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.618 [2024-05-15 11:12:26.188319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.188667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.188683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.618 [2024-05-15 11:12:26.188692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.618 [2024-05-15 11:12:26.188953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.618 [2024-05-15 11:12:26.189196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.618 [2024-05-15 11:12:26.189205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.618 [2024-05-15 11:12:26.189212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.618 [2024-05-15 11:12:26.193130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.618 [2024-05-15 11:12:26.201896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.618 [2024-05-15 11:12:26.202429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.202733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.202749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.618 [2024-05-15 11:12:26.202759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.618 [2024-05-15 11:12:26.203019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.618 [2024-05-15 11:12:26.203264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.618 [2024-05-15 11:12:26.203272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.618 [2024-05-15 11:12:26.203279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.618 [2024-05-15 11:12:26.207196] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.618 [2024-05-15 11:12:26.216194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.618 [2024-05-15 11:12:26.216836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.217204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.217217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.618 [2024-05-15 11:12:26.217227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.618 [2024-05-15 11:12:26.217487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.618 [2024-05-15 11:12:26.217740] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.618 [2024-05-15 11:12:26.217751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.618 [2024-05-15 11:12:26.217758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.618 [2024-05-15 11:12:26.221672] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.618 [2024-05-15 11:12:26.230438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.618 [2024-05-15 11:12:26.231154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.231365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.231380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.618 [2024-05-15 11:12:26.231393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.618 [2024-05-15 11:12:26.231663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.618 [2024-05-15 11:12:26.231908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.618 [2024-05-15 11:12:26.231916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.618 [2024-05-15 11:12:26.231923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.618 [2024-05-15 11:12:26.235835] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.618 [2024-05-15 11:12:26.244829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.618 [2024-05-15 11:12:26.245487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.245753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.245768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.618 [2024-05-15 11:12:26.245779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.618 [2024-05-15 11:12:26.246040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.618 [2024-05-15 11:12:26.246284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.618 [2024-05-15 11:12:26.246295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.618 [2024-05-15 11:12:26.246303] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.618 [2024-05-15 11:12:26.250216] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.618 [2024-05-15 11:12:26.259214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.618 [2024-05-15 11:12:26.259853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.260180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.618 [2024-05-15 11:12:26.260193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.618 [2024-05-15 11:12:26.260203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.618 [2024-05-15 11:12:26.260463] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.618 [2024-05-15 11:12:26.260717] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.618 [2024-05-15 11:12:26.260727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.618 [2024-05-15 11:12:26.260734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.618 [2024-05-15 11:12:26.264658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.880 [2024-05-15 11:12:26.273425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.880 [2024-05-15 11:12:26.274105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.880 [2024-05-15 11:12:26.274428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.880 [2024-05-15 11:12:26.274442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.880 [2024-05-15 11:12:26.274452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.880 [2024-05-15 11:12:26.274724] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.880 [2024-05-15 11:12:26.274970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.880 [2024-05-15 11:12:26.274979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.880 [2024-05-15 11:12:26.274987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.880 [2024-05-15 11:12:26.278899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.880 [2024-05-15 11:12:26.287662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.880 [2024-05-15 11:12:26.288340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.880 [2024-05-15 11:12:26.288746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.880 [2024-05-15 11:12:26.288760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.880 [2024-05-15 11:12:26.288769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.880 [2024-05-15 11:12:26.289029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.880 [2024-05-15 11:12:26.289273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.880 [2024-05-15 11:12:26.289282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.880 [2024-05-15 11:12:26.289290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.880 [2024-05-15 11:12:26.293202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.880 [2024-05-15 11:12:26.301967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.880 [2024-05-15 11:12:26.302551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.880 [2024-05-15 11:12:26.302828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.880 [2024-05-15 11:12:26.302838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.880 [2024-05-15 11:12:26.302846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.880 [2024-05-15 11:12:26.303087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.880 [2024-05-15 11:12:26.303327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.880 [2024-05-15 11:12:26.303336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.880 [2024-05-15 11:12:26.303343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.880 [2024-05-15 11:12:26.307250] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.880 [2024-05-15 11:12:26.316239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.880 [2024-05-15 11:12:26.316870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.880 [2024-05-15 11:12:26.317239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.880 [2024-05-15 11:12:26.317253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.880 [2024-05-15 11:12:26.317262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.880 [2024-05-15 11:12:26.317522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.880 [2024-05-15 11:12:26.317777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.880 [2024-05-15 11:12:26.317787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.317795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 [2024-05-15 11:12:26.321707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 501140 Killed "${NVMF_APP[@]}" "$@" 00:26:29.881 11:12:26 -- host/bdevperf.sh@36 -- # tgt_init 00:26:29.881 11:12:26 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:29.881 11:12:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:29.881 11:12:26 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:29.881 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:26:29.881 [2024-05-15 11:12:26.330568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.881 [2024-05-15 11:12:26.331130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 11:12:26 -- nvmf/common.sh@470 -- # nvmfpid=502844 00:26:29.881 [2024-05-15 11:12:26.331438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.331449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.881 [2024-05-15 11:12:26.331457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.881 11:12:26 -- nvmf/common.sh@471 -- # waitforlisten 502844 00:26:29.881 [2024-05-15 11:12:26.331704] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.881 11:12:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:29.881 [2024-05-15 11:12:26.331948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.881 [2024-05-15 11:12:26.331957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.331964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 11:12:26 -- common/autotest_common.sh@827 -- # '[' -z 502844 ']' 00:26:29.881 11:12:26 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.881 11:12:26 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:29.881 11:12:26 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.881 11:12:26 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:29.881 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:26:29.881 [2024-05-15 11:12:26.335874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
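Annotation: the interleaved shell trace above is bdevperf.sh entering tgt_init after the previous nvmf target (pid 501140) was killed; nvmfappstart relaunches the target and waitforlisten blocks until its RPC socket answers. A condensed, hedged sketch of that restart step, reconstructed only from the flags, paths, and helper names visible in this log rather than from the actual test scripts:

    # relaunch the nvmf target inside the test network namespace (flags copied from the log)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                  # the log shows this instance come up as pid 502844
    # helper from the SPDK test scripts; polls /var/tmp/spdk.sock until the app is ready
    waitforlisten "$nvmfpid"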
00:26:29.881 [2024-05-15 11:12:26.344896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.881 [2024-05-15 11:12:26.345538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.345874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.345888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.881 [2024-05-15 11:12:26.345898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.881 [2024-05-15 11:12:26.346158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.881 [2024-05-15 11:12:26.346402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.881 [2024-05-15 11:12:26.346415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.346423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 [2024-05-15 11:12:26.350341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.881 [2024-05-15 11:12:26.359108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.881 [2024-05-15 11:12:26.359646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.360041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.360055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.881 [2024-05-15 11:12:26.360064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.881 [2024-05-15 11:12:26.360324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.881 [2024-05-15 11:12:26.360577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.881 [2024-05-15 11:12:26.360586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.360594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 [2024-05-15 11:12:26.364515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.881 [2024-05-15 11:12:26.373285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.881 [2024-05-15 11:12:26.373945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.374318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.374331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.881 [2024-05-15 11:12:26.374341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.881 [2024-05-15 11:12:26.374609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.881 [2024-05-15 11:12:26.374854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.881 [2024-05-15 11:12:26.374863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.374871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 [2024-05-15 11:12:26.378787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.881 [2024-05-15 11:12:26.382517] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:26:29.881 [2024-05-15 11:12:26.382577] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.881 [2024-05-15 11:12:26.387557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.881 [2024-05-15 11:12:26.388256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.388645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.388660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.881 [2024-05-15 11:12:26.388671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.881 [2024-05-15 11:12:26.388931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.881 [2024-05-15 11:12:26.389181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.881 [2024-05-15 11:12:26.389191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.389199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 [2024-05-15 11:12:26.393114] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.881 [2024-05-15 11:12:26.401886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.881 [2024-05-15 11:12:26.402355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.402677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.402690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.881 [2024-05-15 11:12:26.402699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.881 [2024-05-15 11:12:26.402942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.881 [2024-05-15 11:12:26.403183] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.881 [2024-05-15 11:12:26.403193] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.403200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 [2024-05-15 11:12:26.407112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.881 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.881 [2024-05-15 11:12:26.416108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.881 [2024-05-15 11:12:26.416846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.417216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.417230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.881 [2024-05-15 11:12:26.417239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.881 [2024-05-15 11:12:26.417500] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.881 [2024-05-15 11:12:26.417753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.881 [2024-05-15 11:12:26.417763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.417770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 [2024-05-15 11:12:26.421686] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.881 [2024-05-15 11:12:26.430449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.881 [2024-05-15 11:12:26.431143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.431519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.881 [2024-05-15 11:12:26.431532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.881 [2024-05-15 11:12:26.431542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.881 [2024-05-15 11:12:26.431814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.881 [2024-05-15 11:12:26.432059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.881 [2024-05-15 11:12:26.432072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.881 [2024-05-15 11:12:26.432080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.881 [2024-05-15 11:12:26.435995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.881 [2024-05-15 11:12:26.444762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.882 [2024-05-15 11:12:26.445307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.445664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.445679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.882 [2024-05-15 11:12:26.445689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.882 [2024-05-15 11:12:26.445950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.882 [2024-05-15 11:12:26.446194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.882 [2024-05-15 11:12:26.446203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.882 [2024-05-15 11:12:26.446211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.882 [2024-05-15 11:12:26.450128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.882 [2024-05-15 11:12:26.459128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.882 [2024-05-15 11:12:26.459700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.459897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.459908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.882 [2024-05-15 11:12:26.459915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.882 [2024-05-15 11:12:26.460157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.882 [2024-05-15 11:12:26.460397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.882 [2024-05-15 11:12:26.460406] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.882 [2024-05-15 11:12:26.460412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.882 [2024-05-15 11:12:26.462435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:29.882 [2024-05-15 11:12:26.464335] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.882 [2024-05-15 11:12:26.473333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.882 [2024-05-15 11:12:26.474009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.474399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.474414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.882 [2024-05-15 11:12:26.474424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.882 [2024-05-15 11:12:26.474692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.882 [2024-05-15 11:12:26.474937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.882 [2024-05-15 11:12:26.474951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.882 [2024-05-15 11:12:26.474959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.882 [2024-05-15 11:12:26.478980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:29.882 [2024-05-15 11:12:26.487524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.882 [2024-05-15 11:12:26.488180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.488524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.488537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.882 [2024-05-15 11:12:26.488556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.882 [2024-05-15 11:12:26.488818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.882 [2024-05-15 11:12:26.489063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.882 [2024-05-15 11:12:26.489071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.882 [2024-05-15 11:12:26.489079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.882 [2024-05-15 11:12:26.492997] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.882 [2024-05-15 11:12:26.501767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.882 [2024-05-15 11:12:26.502459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.502785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.502800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.882 [2024-05-15 11:12:26.502810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.882 [2024-05-15 11:12:26.503071] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.882 [2024-05-15 11:12:26.503315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.882 [2024-05-15 11:12:26.503323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.882 [2024-05-15 11:12:26.503331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.882 [2024-05-15 11:12:26.507249] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.882 [2024-05-15 11:12:26.515421] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.882 [2024-05-15 11:12:26.515446] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.882 [2024-05-15 11:12:26.515452] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.882 [2024-05-15 11:12:26.515457] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:29.882 [2024-05-15 11:12:26.515460] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.882 [2024-05-15 11:12:26.515565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.882 [2024-05-15 11:12:26.515663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.882 [2024-05-15 11:12:26.515792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.882 [2024-05-15 11:12:26.516016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.882 [2024-05-15 11:12:26.516589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.516960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.516971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.882 [2024-05-15 11:12:26.516979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.882 [2024-05-15 11:12:26.517226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:29.882 [2024-05-15 11:12:26.517468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.882 [2024-05-15 11:12:26.517476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.882 [2024-05-15 11:12:26.517483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.882 [2024-05-15 11:12:26.521431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.882 [2024-05-15 11:12:26.530195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.882 [2024-05-15 11:12:26.530758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.531079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.882 [2024-05-15 11:12:26.531090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:29.882 [2024-05-15 11:12:26.531098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:29.882 [2024-05-15 11:12:26.531340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.144 [2024-05-15 11:12:26.531587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.144 [2024-05-15 11:12:26.531597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.144 [2024-05-15 11:12:26.531605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.144 [2024-05-15 11:12:26.535516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
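Annotation: the app_setup_trace notices above confirm tracing was enabled (-e 0xFFFF) on the restarted target, and the log itself names the two ways to consume it. Sketch of those commands exactly as the log states them, with an arbitrary copy destination:

    # live snapshot of nvmf tracepoints from the running target (instance id 0)
    spdk_trace -s nvmf -i 0
    # or keep the raw trace file for offline analysis/debug (destination path is arbitrary)
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0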
00:26:30.144 [2024-05-15 11:12:26.544520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.144 [2024-05-15 11:12:26.545225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.144 [2024-05-15 11:12:26.545625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.144 [2024-05-15 11:12:26.545640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.144 [2024-05-15 11:12:26.545650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.144 [2024-05-15 11:12:26.545913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.144 [2024-05-15 11:12:26.546158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.144 [2024-05-15 11:12:26.546167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.144 [2024-05-15 11:12:26.546175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.144 [2024-05-15 11:12:26.550089] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.144 [2024-05-15 11:12:26.558858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.144 [2024-05-15 11:12:26.559461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.144 [2024-05-15 11:12:26.559753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.144 [2024-05-15 11:12:26.559769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.144 [2024-05-15 11:12:26.559777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.144 [2024-05-15 11:12:26.560018] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.144 [2024-05-15 11:12:26.560259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.144 [2024-05-15 11:12:26.560268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.144 [2024-05-15 11:12:26.560275] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.144 [2024-05-15 11:12:26.564200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.144 [2024-05-15 11:12:26.573193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.144 [2024-05-15 11:12:26.573892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.144 [2024-05-15 11:12:26.574279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.144 [2024-05-15 11:12:26.574292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.144 [2024-05-15 11:12:26.574302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.144 [2024-05-15 11:12:26.574571] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.144 [2024-05-15 11:12:26.574816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.144 [2024-05-15 11:12:26.574825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.144 [2024-05-15 11:12:26.574833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.144 [2024-05-15 11:12:26.578748] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.144 [2024-05-15 11:12:26.587513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.144 [2024-05-15 11:12:26.588223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.588449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.588462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.588471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.588738] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.588984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.588992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.589000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.592914] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.145 [2024-05-15 11:12:26.601908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.602502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.602815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.602827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.602839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.603081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.603322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.603330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.603337] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.607247] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.145 [2024-05-15 11:12:26.616241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.616884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.617263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.617277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.617287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.617552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.617798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.617806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.617814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.621725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.145 [2024-05-15 11:12:26.630491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.631192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.631579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.631593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.631603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.631863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.632108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.632117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.632125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.636040] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.145 [2024-05-15 11:12:26.644809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.645513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.645768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.645783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.645792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.646058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.646303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.646313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.646320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.650234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.145 [2024-05-15 11:12:26.659001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.659660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.660024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.660037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.660046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.660307] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.660559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.660569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.660577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.664499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.145 [2024-05-15 11:12:26.673270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.673823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.674207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.674220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.674230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.674490] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.674740] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.674750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.674758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.678674] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.145 [2024-05-15 11:12:26.687674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.688381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.688787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.688802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.688812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.689072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.689321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.689331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.689338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.693254] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.145 [2024-05-15 11:12:26.702022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.702650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.703001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.703014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.703024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.703284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.703529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.703539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.145 [2024-05-15 11:12:26.703554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.145 [2024-05-15 11:12:26.707466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.145 [2024-05-15 11:12:26.716239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.145 [2024-05-15 11:12:26.716936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.717316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.145 [2024-05-15 11:12:26.717329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.145 [2024-05-15 11:12:26.717339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.145 [2024-05-15 11:12:26.717605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.145 [2024-05-15 11:12:26.717850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.145 [2024-05-15 11:12:26.717860] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.146 [2024-05-15 11:12:26.717867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.146 [2024-05-15 11:12:26.721779] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.146 [2024-05-15 11:12:26.730549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.146 [2024-05-15 11:12:26.731256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.731482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.731495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.146 [2024-05-15 11:12:26.731504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.146 [2024-05-15 11:12:26.731772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.146 [2024-05-15 11:12:26.732019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.146 [2024-05-15 11:12:26.732034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.146 [2024-05-15 11:12:26.732042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.146 [2024-05-15 11:12:26.735958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
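[editor's note] Each refused connect above is followed by nvme_tcp_qpair_process_completions logging "Failed to flush tqpair=0xf30840 (9): Bad file descriptor". The number in parentheses is errno 9, EBADF on Linux: by the time the qpair tries to flush, the socket descriptor from the failed connect has already been torn down, so any further I/O on it fails with EBADF. A tiny standalone illustration (again a sketch, not SPDK's actual code path):

/*
 * Sketch only: why a flush on an already-closed socket reports errno 9 (EBADF).
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    close(fd);                      /* socket already torn down after the failed connect */
    if (write(fd, "x", 1) < 0) {    /* any later flush/write on it fails */
        /* Prints: flush failed (9): Bad file descriptor */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    }
    return 0;
}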
00:26:30.146 [2024-05-15 11:12:26.744728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.146 [2024-05-15 11:12:26.745433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.745799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.745815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.146 [2024-05-15 11:12:26.745825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.146 [2024-05-15 11:12:26.746085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.146 [2024-05-15 11:12:26.746330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.146 [2024-05-15 11:12:26.746339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.146 [2024-05-15 11:12:26.746346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.146 [2024-05-15 11:12:26.750260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.146 [2024-05-15 11:12:26.759086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.146 [2024-05-15 11:12:26.759680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.759933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.759951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.146 [2024-05-15 11:12:26.759961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.146 [2024-05-15 11:12:26.760223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.146 [2024-05-15 11:12:26.760468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.146 [2024-05-15 11:12:26.760478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.146 [2024-05-15 11:12:26.760486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.146 [2024-05-15 11:12:26.764422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.146 [2024-05-15 11:12:26.773423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.146 [2024-05-15 11:12:26.774073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.774427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.774440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.146 [2024-05-15 11:12:26.774450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.146 [2024-05-15 11:12:26.774718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.146 [2024-05-15 11:12:26.774964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.146 [2024-05-15 11:12:26.774972] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.146 [2024-05-15 11:12:26.774987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.146 [2024-05-15 11:12:26.778899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.146 [2024-05-15 11:12:26.787670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.146 [2024-05-15 11:12:26.788266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.788609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.146 [2024-05-15 11:12:26.788621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.146 [2024-05-15 11:12:26.788628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.146 [2024-05-15 11:12:26.788870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.146 [2024-05-15 11:12:26.789111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.146 [2024-05-15 11:12:26.789119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.146 [2024-05-15 11:12:26.789126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.146 [2024-05-15 11:12:26.793039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.409 [2024-05-15 11:12:26.802033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.409 [2024-05-15 11:12:26.802577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-05-15 11:12:26.802786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-05-15 11:12:26.802799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.409 [2024-05-15 11:12:26.802810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.409 [2024-05-15 11:12:26.803070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.409 [2024-05-15 11:12:26.803315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.409 [2024-05-15 11:12:26.803324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.409 [2024-05-15 11:12:26.803332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.409 [2024-05-15 11:12:26.807253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.409 [2024-05-15 11:12:26.816253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.409 [2024-05-15 11:12:26.816910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-05-15 11:12:26.817246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-05-15 11:12:26.817259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.409 [2024-05-15 11:12:26.817269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.409 [2024-05-15 11:12:26.817530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.409 [2024-05-15 11:12:26.817782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.409 [2024-05-15 11:12:26.817793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.409 [2024-05-15 11:12:26.817800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.409 [2024-05-15 11:12:26.821711] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.409 [2024-05-15 11:12:26.830481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.409 [2024-05-15 11:12:26.831022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-05-15 11:12:26.831402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-05-15 11:12:26.831417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.409 [2024-05-15 11:12:26.831427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.409 [2024-05-15 11:12:26.831695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.831940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.831949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.831956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.835870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.410 [2024-05-15 11:12:26.844866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.845513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.845878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.845893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.845902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.846163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.846407] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.846416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.846423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.850347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.410 [2024-05-15 11:12:26.859118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.859693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.860040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.860051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.860059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.860300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.860541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.860554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.860561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.864484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.410 [2024-05-15 11:12:26.873481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.874077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.874417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.874428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.874436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.874682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.874923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.874933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.874940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.878844] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.410 [2024-05-15 11:12:26.887837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.888508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.888870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.888885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.888894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.889155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.889399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.889408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.889415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.893332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.410 [2024-05-15 11:12:26.902100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.902852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.903093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.903106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.903115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.903377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.903628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.903639] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.903647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.907564] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.410 [2024-05-15 11:12:26.916329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.917080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.917429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.917443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.917453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.917720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.917966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.917975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.917983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.921895] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.410 [2024-05-15 11:12:26.930665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.931220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.931652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.931663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.931671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.931913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.932153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.932163] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.932170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.936078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.410 [2024-05-15 11:12:26.944843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.945390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.945580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.945591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.945599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.945839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.946082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.946090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.946097] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.950005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.410 [2024-05-15 11:12:26.959231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.959894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.960127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.960141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.410 [2024-05-15 11:12:26.960156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.410 [2024-05-15 11:12:26.960417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.410 [2024-05-15 11:12:26.960668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.410 [2024-05-15 11:12:26.960678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.410 [2024-05-15 11:12:26.960686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.410 [2024-05-15 11:12:26.964615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.410 [2024-05-15 11:12:26.973615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.410 [2024-05-15 11:12:26.974167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-05-15 11:12:26.974541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:26.974558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.411 [2024-05-15 11:12:26.974566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.411 [2024-05-15 11:12:26.974807] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.411 [2024-05-15 11:12:26.975047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.411 [2024-05-15 11:12:26.975056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.411 [2024-05-15 11:12:26.975063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.411 [2024-05-15 11:12:26.978971] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.411 [2024-05-15 11:12:26.987965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.411 [2024-05-15 11:12:26.988649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:26.988881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:26.988896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.411 [2024-05-15 11:12:26.988907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.411 [2024-05-15 11:12:26.989167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.411 [2024-05-15 11:12:26.989413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.411 [2024-05-15 11:12:26.989421] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.411 [2024-05-15 11:12:26.989429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.411 [2024-05-15 11:12:26.993561] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.411 [2024-05-15 11:12:27.002343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.411 [2024-05-15 11:12:27.002891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.003242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.003256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.411 [2024-05-15 11:12:27.003265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.411 [2024-05-15 11:12:27.003531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.411 [2024-05-15 11:12:27.003784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.411 [2024-05-15 11:12:27.003793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.411 [2024-05-15 11:12:27.003801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.411 [2024-05-15 11:12:27.007716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.411 [2024-05-15 11:12:27.016717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.411 [2024-05-15 11:12:27.017423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.017792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.017808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.411 [2024-05-15 11:12:27.017817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.411 [2024-05-15 11:12:27.018078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.411 [2024-05-15 11:12:27.018323] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.411 [2024-05-15 11:12:27.018333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.411 [2024-05-15 11:12:27.018340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.411 [2024-05-15 11:12:27.022257] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.411 [2024-05-15 11:12:27.031025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.411 [2024-05-15 11:12:27.031576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.031865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.031876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.411 [2024-05-15 11:12:27.031885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.411 [2024-05-15 11:12:27.032128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.411 [2024-05-15 11:12:27.032370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.411 [2024-05-15 11:12:27.032379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.411 [2024-05-15 11:12:27.032386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.411 [2024-05-15 11:12:27.036300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.411 [2024-05-15 11:12:27.045297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.411 [2024-05-15 11:12:27.045993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.046376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.046390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.411 [2024-05-15 11:12:27.046399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.411 [2024-05-15 11:12:27.046666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.411 [2024-05-15 11:12:27.046916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.411 [2024-05-15 11:12:27.046925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.411 [2024-05-15 11:12:27.046933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.411 [2024-05-15 11:12:27.050847] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.411 [2024-05-15 11:12:27.059619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.411 [2024-05-15 11:12:27.060172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.060476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.411 [2024-05-15 11:12:27.060487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.411 [2024-05-15 11:12:27.060495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.411 [2024-05-15 11:12:27.060743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.673 [2024-05-15 11:12:27.060985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.673 [2024-05-15 11:12:27.060995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.673 [2024-05-15 11:12:27.061002] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.673 [2024-05-15 11:12:27.064917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.673 [2024-05-15 11:12:27.073911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.673 [2024-05-15 11:12:27.074582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.074833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.074846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.673 [2024-05-15 11:12:27.074856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.673 [2024-05-15 11:12:27.075118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.673 [2024-05-15 11:12:27.075363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.673 [2024-05-15 11:12:27.075374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.673 [2024-05-15 11:12:27.075383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.673 [2024-05-15 11:12:27.079307] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.673 [2024-05-15 11:12:27.088308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.673 [2024-05-15 11:12:27.088882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.089078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.089088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.673 [2024-05-15 11:12:27.089096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.673 [2024-05-15 11:12:27.089337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.673 [2024-05-15 11:12:27.089585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.673 [2024-05-15 11:12:27.089599] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.673 [2024-05-15 11:12:27.089607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.673 [2024-05-15 11:12:27.093517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.673 [2024-05-15 11:12:27.102517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.673 [2024-05-15 11:12:27.102959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.103283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.103295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.673 [2024-05-15 11:12:27.103304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.673 [2024-05-15 11:12:27.103549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.673 [2024-05-15 11:12:27.103792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.673 [2024-05-15 11:12:27.103802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.673 [2024-05-15 11:12:27.103809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.673 [2024-05-15 11:12:27.107719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.673 [2024-05-15 11:12:27.116719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.673 [2024-05-15 11:12:27.117301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.117654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.117666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.673 [2024-05-15 11:12:27.117674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.673 [2024-05-15 11:12:27.117915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.673 [2024-05-15 11:12:27.118156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.673 [2024-05-15 11:12:27.118165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.673 [2024-05-15 11:12:27.118172] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.673 [2024-05-15 11:12:27.122081] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.673 [2024-05-15 11:12:27.131080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.673 [2024-05-15 11:12:27.131666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.131898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.131908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.673 [2024-05-15 11:12:27.131917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.673 [2024-05-15 11:12:27.132158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.673 [2024-05-15 11:12:27.132399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.673 [2024-05-15 11:12:27.132407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.673 [2024-05-15 11:12:27.132418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.673 [2024-05-15 11:12:27.136329] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.673 [2024-05-15 11:12:27.145324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.673 [2024-05-15 11:12:27.145885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.146242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.146252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.673 [2024-05-15 11:12:27.146260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.673 [2024-05-15 11:12:27.146501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.673 [2024-05-15 11:12:27.146747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.673 [2024-05-15 11:12:27.146756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.673 [2024-05-15 11:12:27.146763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.673 [2024-05-15 11:12:27.150671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.673 11:12:27 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:30.673 11:12:27 -- common/autotest_common.sh@860 -- # return 0 00:26:30.673 11:12:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:30.673 11:12:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:30.673 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:30.673 [2024-05-15 11:12:27.159668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.673 [2024-05-15 11:12:27.160341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.160633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.160649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.673 [2024-05-15 11:12:27.160659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.673 [2024-05-15 11:12:27.160920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.673 [2024-05-15 11:12:27.161165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.673 [2024-05-15 11:12:27.161174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.673 [2024-05-15 11:12:27.161181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.673 [2024-05-15 11:12:27.165105] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.673 [2024-05-15 11:12:27.173905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.673 [2024-05-15 11:12:27.174503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.174842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-05-15 11:12:27.174854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.673 [2024-05-15 11:12:27.174862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.674 [2024-05-15 11:12:27.175103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.674 [2024-05-15 11:12:27.175344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.674 [2024-05-15 11:12:27.175358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.674 [2024-05-15 11:12:27.175365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.674 [2024-05-15 11:12:27.179278] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.674 [2024-05-15 11:12:27.188274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.674 [2024-05-15 11:12:27.188755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.189072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.189085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.674 [2024-05-15 11:12:27.189093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.674 [2024-05-15 11:12:27.189335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.674 [2024-05-15 11:12:27.189581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.674 [2024-05-15 11:12:27.189591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.674 [2024-05-15 11:12:27.189599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.674 [2024-05-15 11:12:27.193507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.674 11:12:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.674 11:12:27 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:30.674 11:12:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.674 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:30.674 [2024-05-15 11:12:27.198317] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.674 [2024-05-15 11:12:27.202501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.674 [2024-05-15 11:12:27.203071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.203415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.203429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.674 11:12:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.674 [2024-05-15 11:12:27.203439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.674 [2024-05-15 11:12:27.203715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.674 11:12:27 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:30.674 [2024-05-15 11:12:27.203961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.674 [2024-05-15 11:12:27.203970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.674 [2024-05-15 11:12:27.203977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.674 11:12:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.674 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:30.674 [2024-05-15 11:12:27.207893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.674 [2024-05-15 11:12:27.216893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.674 [2024-05-15 11:12:27.217480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.217652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.217668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.674 [2024-05-15 11:12:27.217676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.674 [2024-05-15 11:12:27.217917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.674 [2024-05-15 11:12:27.218160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.674 [2024-05-15 11:12:27.218169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.674 [2024-05-15 11:12:27.218175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:30.674 [2024-05-15 11:12:27.222086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.674 [2024-05-15 11:12:27.231080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.674 [2024-05-15 11:12:27.231668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.232026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.232036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.674 [2024-05-15 11:12:27.232044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.674 [2024-05-15 11:12:27.232285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.674 [2024-05-15 11:12:27.232526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.674 [2024-05-15 11:12:27.232534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.674 [2024-05-15 11:12:27.232541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.674 [2024-05-15 11:12:27.236454] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.674 Malloc0 00:26:30.674 11:12:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.674 11:12:27 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:30.674 11:12:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.674 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:30.674 [2024-05-15 11:12:27.245451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.674 [2024-05-15 11:12:27.246013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.246331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.246342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.674 [2024-05-15 11:12:27.246349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.674 [2024-05-15 11:12:27.246596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.674 [2024-05-15 11:12:27.246837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.674 [2024-05-15 11:12:27.246846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.674 [2024-05-15 11:12:27.246852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.674 [2024-05-15 11:12:27.250756] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:30.674 11:12:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.674 11:12:27 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:30.674 11:12:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.674 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:30.674 [2024-05-15 11:12:27.259745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.674 [2024-05-15 11:12:27.260171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.260479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-05-15 11:12:27.260490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30840 with addr=10.0.0.2, port=4420 00:26:30.674 [2024-05-15 11:12:27.260497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30840 is same with the state(5) to be set 00:26:30.674 [2024-05-15 11:12:27.260743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30840 (9): Bad file descriptor 00:26:30.674 [2024-05-15 11:12:27.260984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.674 [2024-05-15 11:12:27.260993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.674 [2024-05-15 11:12:27.260999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.674 [2024-05-15 11:12:27.264919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.674 11:12:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.674 11:12:27 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.674 11:12:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.674 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:30.674 [2024-05-15 11:12:27.272271] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:30.674 [2024-05-15 11:12:27.272450] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.674 [2024-05-15 11:12:27.274142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.674 11:12:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.674 11:12:27 -- host/bdevperf.sh@38 -- # wait 501757 00:26:30.936 [2024-05-15 11:12:27.353801] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
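The rpc_cmd calls interleaved with the reconnect errors above are what stand up the target this workload attaches to: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A minimal stand-alone sketch of that same sequence follows, assuming rpc_cmd forwards to scripts/rpc.py in the SPDK tree used by this job (the wrapper's socket plumbing is not shown in this excerpt):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed backend of the rpc_cmd wrapper
  $rpc nvmf_create_transport -t tcp -o -u 8192                           # same transport options host/bdevperf.sh@17 issues above
  $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420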
00:26:40.925 00:26:40.925 Latency(us) 00:26:40.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.925 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:40.925 Verification LBA range: start 0x0 length 0x4000 00:26:40.925 Nvme1n1 : 15.01 8395.96 32.80 8732.80 0.00 7446.15 590.51 14090.24 00:26:40.925 =================================================================================================================== 00:26:40.925 Total : 8395.96 32.80 8732.80 0.00 7446.15 590.51 14090.24 00:26:40.925 11:12:35 -- host/bdevperf.sh@39 -- # sync 00:26:40.925 11:12:35 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.925 11:12:35 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.925 11:12:35 -- common/autotest_common.sh@10 -- # set +x 00:26:40.925 11:12:35 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.925 11:12:35 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:40.925 11:12:35 -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:40.925 11:12:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:40.925 11:12:35 -- nvmf/common.sh@117 -- # sync 00:26:40.925 11:12:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.926 11:12:35 -- nvmf/common.sh@120 -- # set +e 00:26:40.926 11:12:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.926 11:12:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.926 rmmod nvme_tcp 00:26:40.926 rmmod nvme_fabrics 00:26:40.926 rmmod nvme_keyring 00:26:40.926 11:12:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.926 11:12:36 -- nvmf/common.sh@124 -- # set -e 00:26:40.926 11:12:36 -- nvmf/common.sh@125 -- # return 0 00:26:40.926 11:12:36 -- nvmf/common.sh@478 -- # '[' -n 502844 ']' 00:26:40.926 11:12:36 -- nvmf/common.sh@479 -- # killprocess 502844 00:26:40.926 11:12:36 -- common/autotest_common.sh@946 -- # '[' -z 502844 ']' 00:26:40.926 11:12:36 -- common/autotest_common.sh@950 -- # kill -0 502844 00:26:40.926 11:12:36 -- common/autotest_common.sh@951 -- # uname 00:26:40.926 11:12:36 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:40.926 11:12:36 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 502844 00:26:40.926 11:12:36 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:40.926 11:12:36 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:40.926 11:12:36 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 502844' 00:26:40.926 killing process with pid 502844 00:26:40.926 11:12:36 -- common/autotest_common.sh@965 -- # kill 502844 00:26:40.926 [2024-05-15 11:12:36.096650] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:40.926 11:12:36 -- common/autotest_common.sh@970 -- # wait 502844 00:26:40.926 11:12:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:40.926 11:12:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:40.926 11:12:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:40.926 11:12:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.926 11:12:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:40.926 11:12:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.926 11:12:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.926 11:12:36 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:26:41.869 11:12:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:41.869 00:26:41.869 real 0m27.782s 00:26:41.869 user 1m3.458s 00:26:41.869 sys 0m6.908s 00:26:41.869 11:12:38 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:41.869 11:12:38 -- common/autotest_common.sh@10 -- # set +x 00:26:41.869 ************************************ 00:26:41.869 END TEST nvmf_bdevperf 00:26:41.869 ************************************ 00:26:41.869 11:12:38 -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:41.869 11:12:38 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:41.869 11:12:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:41.869 11:12:38 -- common/autotest_common.sh@10 -- # set +x 00:26:41.869 ************************************ 00:26:41.869 START TEST nvmf_target_disconnect 00:26:41.869 ************************************ 00:26:41.869 11:12:38 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:41.869 * Looking for test storage... 00:26:41.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.869 11:12:38 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.869 11:12:38 -- nvmf/common.sh@7 -- # uname -s 00:26:41.869 11:12:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.869 11:12:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.869 11:12:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.869 11:12:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.869 11:12:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.869 11:12:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.869 11:12:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.869 11:12:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.869 11:12:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.869 11:12:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.869 11:12:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:41.869 11:12:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:41.869 11:12:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.869 11:12:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.869 11:12:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.869 11:12:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.869 11:12:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.869 11:12:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.869 11:12:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.869 11:12:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.869 11:12:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.869 11:12:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.869 11:12:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.869 11:12:38 -- paths/export.sh@5 -- # export PATH 00:26:41.869 11:12:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.869 11:12:38 -- nvmf/common.sh@47 -- # : 0 00:26:41.869 11:12:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.869 11:12:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.869 11:12:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.869 11:12:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.869 11:12:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.869 11:12:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.869 11:12:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.869 11:12:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.869 11:12:38 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:41.869 11:12:38 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:41.869 11:12:38 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:41.869 11:12:38 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:41.869 11:12:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:41.869 11:12:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.869 11:12:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:41.869 11:12:38 -- nvmf/common.sh@399 -- # 
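The NVME_CONNECT and NVME_HOST variables that nvmf/common.sh sets above are the kernel-initiator counterparts of what these tests drive through the SPDK reconnect example; a hypothetical invocation built from them (not issued anywhere in this run) would look roughly like:

  # illustrative only -- this log exercises build/examples/reconnect, not nvme-cli
  $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # i.e. nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n <subsystem nqn> -a <target addr> -s 4420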
local -g is_hw=no 00:26:41.869 11:12:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:41.869 11:12:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.869 11:12:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.869 11:12:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.869 11:12:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:41.869 11:12:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:41.869 11:12:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.869 11:12:38 -- common/autotest_common.sh@10 -- # set +x 00:26:50.039 11:12:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:50.039 11:12:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.039 11:12:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.039 11:12:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.039 11:12:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.039 11:12:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.039 11:12:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.039 11:12:45 -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.039 11:12:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.039 11:12:45 -- nvmf/common.sh@296 -- # e810=() 00:26:50.039 11:12:45 -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.039 11:12:45 -- nvmf/common.sh@297 -- # x722=() 00:26:50.039 11:12:45 -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.039 11:12:45 -- nvmf/common.sh@298 -- # mlx=() 00:26:50.039 11:12:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.039 11:12:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.039 11:12:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.039 11:12:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:50.039 11:12:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.039 11:12:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.039 11:12:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:50.039 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:50.039 11:12:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.039 
11:12:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.039 11:12:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:50.039 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:50.039 11:12:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.039 11:12:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:50.039 11:12:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.039 11:12:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.039 11:12:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:50.039 11:12:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.039 11:12:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:50.039 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:50.039 11:12:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.039 11:12:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.039 11:12:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.039 11:12:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:50.039 11:12:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.040 11:12:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:50.040 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:50.040 11:12:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.040 11:12:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:50.040 11:12:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:50.040 11:12:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:50.040 11:12:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:50.040 11:12:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:50.040 11:12:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.040 11:12:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.040 11:12:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.040 11:12:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.040 11:12:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.040 11:12:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.040 11:12:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.040 11:12:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.040 11:12:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.040 11:12:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.040 11:12:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.040 11:12:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.040 11:12:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.040 11:12:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.040 11:12:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.040 
11:12:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:50.040 11:12:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.040 11:12:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.040 11:12:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.040 11:12:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:50.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:26:50.040 00:26:50.040 --- 10.0.0.2 ping statistics --- 00:26:50.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.040 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:26:50.040 11:12:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:26:50.040 00:26:50.040 --- 10.0.0.1 ping statistics --- 00:26:50.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.040 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:50.040 11:12:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.040 11:12:45 -- nvmf/common.sh@411 -- # return 0 00:26:50.040 11:12:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:50.040 11:12:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.040 11:12:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:50.040 11:12:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:50.040 11:12:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.040 11:12:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:50.040 11:12:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:50.040 11:12:45 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:50.040 11:12:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:50.040 11:12:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:50.040 11:12:45 -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 ************************************ 00:26:50.040 START TEST nvmf_target_disconnect_tc1 00:26:50.040 ************************************ 00:26:50.040 11:12:45 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:26:50.040 11:12:45 -- host/target_disconnect.sh@32 -- # set +e 00:26:50.040 11:12:45 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.040 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.040 [2024-05-15 11:12:45.647399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.040 [2024-05-15 11:12:45.647804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.040 [2024-05-15 11:12:45.647825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a85b70 with addr=10.0.0.2, port=4420 00:26:50.040 [2024-05-15 11:12:45.647859] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:50.040 [2024-05-15 11:12:45.647872] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:50.040 [2024-05-15 11:12:45.647887] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:50.040 spdk_nvme_probe() 
failed for transport address '10.0.0.2' 00:26:50.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:50.040 Initializing NVMe Controllers 00:26:50.040 11:12:45 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:50.040 11:12:45 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:50.040 11:12:45 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:26:50.040 11:12:45 -- common/autotest_common.sh@1149 -- # return 0 00:26:50.040 11:12:45 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:50.040 11:12:45 -- host/target_disconnect.sh@41 -- # set -e 00:26:50.040 00:26:50.040 real 0m0.103s 00:26:50.040 user 0m0.046s 00:26:50.040 sys 0m0.056s 00:26:50.040 11:12:45 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:50.040 11:12:45 -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 ************************************ 00:26:50.040 END TEST nvmf_target_disconnect_tc1 00:26:50.040 ************************************ 00:26:50.040 11:12:45 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:50.040 11:12:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:50.040 11:12:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:50.040 11:12:45 -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 ************************************ 00:26:50.040 START TEST nvmf_target_disconnect_tc2 00:26:50.040 ************************************ 00:26:50.040 11:12:45 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:26:50.040 11:12:45 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:50.040 11:12:45 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:50.040 11:12:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:50.040 11:12:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:50.040 11:12:45 -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 11:12:45 -- nvmf/common.sh@470 -- # nvmfpid=508874 00:26:50.040 11:12:45 -- nvmf/common.sh@471 -- # waitforlisten 508874 00:26:50.040 11:12:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:50.040 11:12:45 -- common/autotest_common.sh@827 -- # '[' -z 508874 ']' 00:26:50.040 11:12:45 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.040 11:12:45 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:50.040 11:12:45 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.040 11:12:45 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:50.040 11:12:45 -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 [2024-05-15 11:12:45.803588] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:26:50.040 [2024-05-15 11:12:45.803643] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.040 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.040 [2024-05-15 11:12:45.889814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.040 [2024-05-15 11:12:45.984327] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.040 [2024-05-15 11:12:45.984377] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.040 [2024-05-15 11:12:45.984385] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.040 [2024-05-15 11:12:45.984392] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.040 [2024-05-15 11:12:45.984398] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.040 [2024-05-15 11:12:45.984569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:50.040 [2024-05-15 11:12:45.984737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:50.040 [2024-05-15 11:12:45.984906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:50.040 [2024-05-15 11:12:45.985180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:50.040 11:12:46 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:50.040 11:12:46 -- common/autotest_common.sh@860 -- # return 0 00:26:50.040 11:12:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:50.040 11:12:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.040 11:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 11:12:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.040 11:12:46 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:50.040 11:12:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.040 11:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 Malloc0 00:26:50.040 11:12:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.040 11:12:46 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:50.040 11:12:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.040 11:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 [2024-05-15 11:12:46.680474] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.040 11:12:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.040 11:12:46 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.040 11:12:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.040 11:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:50.303 11:12:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.303 11:12:46 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.303 11:12:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.303 11:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:50.303 11:12:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.303 11:12:46 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.303 11:12:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.303 11:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:50.303 [2024-05-15 11:12:46.720560] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:50.303 [2024-05-15 11:12:46.720883] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.303 11:12:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.303 11:12:46 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:50.303 11:12:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.303 11:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:50.303 11:12:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.303 11:12:46 -- host/target_disconnect.sh@50 -- # reconnectpid=509221 00:26:50.303 11:12:46 -- host/target_disconnect.sh@52 -- # sleep 2 00:26:50.303 11:12:46 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.303 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.221 11:12:48 -- host/target_disconnect.sh@53 -- # kill -9 508874 00:26:52.221 11:12:48 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 
00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Write completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 Read completed with error (sct=0, sc=8) 00:26:52.221 starting I/O failed 00:26:52.221 [2024-05-15 11:12:48.752619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.221 [2024-05-15 11:12:48.752992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.221 [2024-05-15 11:12:48.753185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.221 [2024-05-15 11:12:48.753198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.221 qpair failed and we were unable to recover it. 00:26:52.221 [2024-05-15 11:12:48.753400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.221 [2024-05-15 11:12:48.753828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.221 [2024-05-15 11:12:48.753856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.221 qpair failed and we were unable to recover it. 00:26:52.221 [2024-05-15 11:12:48.754200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.221 [2024-05-15 11:12:48.754503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.754512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.754811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.755105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.755114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.755398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.755715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.755723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 
00:26:52.222 [2024-05-15 11:12:48.756013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.756345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.756354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.756683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.757006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.757014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.757304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.757574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.757585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.757744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.758021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.758029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.758384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.758650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.758658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.758985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.759286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.759294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.759632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.759971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.759979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 
00:26:52.222 [2024-05-15 11:12:48.760264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.760578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.760587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.760874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.761136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.761145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.761330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.761629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.761638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.762068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.762366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.762374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.762689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.763010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.763018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.763350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.763669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.763680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.764022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.764375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.764383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 
00:26:52.222 [2024-05-15 11:12:48.764692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.765001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.765011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.765345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.765573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.765581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.765902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.766218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.766226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.766501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.766864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.766872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.767198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.767485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.767493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.767807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.768144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.768152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.768442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.768786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.768794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 
00:26:52.222 [2024-05-15 11:12:48.769126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.769417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.769426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.769651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.770006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.770014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.770317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.770608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.770616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.770932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.771255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.771262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.771444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.771715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.771723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.222 qpair failed and we were unable to recover it. 00:26:52.222 [2024-05-15 11:12:48.772017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.772370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.222 [2024-05-15 11:12:48.772378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.772668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.772856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.772863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 
00:26:52.223 [2024-05-15 11:12:48.773152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.773365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.773373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.773701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.774023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.774030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.774333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.774665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.774673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.774989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.775319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.775327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.775655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.775842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.775849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.776156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.776483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.776490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.776799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.777091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.777098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 
00:26:52.223 [2024-05-15 11:12:48.777277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.777551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.777559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.777923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.778088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.778096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.778370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.778681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.778689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.778978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.779286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.779294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.779587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.779905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.779912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.780198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.780503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.780511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.780845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.781153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.781161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 
00:26:52.223 [2024-05-15 11:12:48.781450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.781599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.781608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.781885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.782183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.782191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.782508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.782725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.782732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.783034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.783357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.783364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.783674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.783937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.783945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.784286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.784569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.784578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.784895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.785195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.785202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 
00:26:52.223 [2024-05-15 11:12:48.785488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.785770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.785777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.786078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.786365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.786373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.786660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.786961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.786969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.787313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.787643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.787650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.788013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.788346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.788354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.788696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.788985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.788992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 00:26:52.223 [2024-05-15 11:12:48.789313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.789607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.223 [2024-05-15 11:12:48.789615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.223 qpair failed and we were unable to recover it. 
00:26:52.223 [2024-05-15 11:12:48.789937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.790186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.790194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.790479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.790682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.790690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.790957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.791272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.791280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.791596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.792038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.792045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.792263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.792571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.792578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.792884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.793194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.793203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.793516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.793807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.793815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 
00:26:52.224 [2024-05-15 11:12:48.794157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.794349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.794357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.794674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.795013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.795022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.795198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.795537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.795560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.795880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.796212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.796220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.796512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.796813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.796822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.797162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.797388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.797397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.797720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.798034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.798043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 
00:26:52.224 [2024-05-15 11:12:48.798341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.798555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.798563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.798780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.799072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.799080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.799368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.799679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.799687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.799976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.800312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.800321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.800608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.800932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.800940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.801232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.801378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.801386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.801707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.801974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.801983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 
00:26:52.224 [2024-05-15 11:12:48.802290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.802586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.802594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.802902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.803233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.803241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.803532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.803816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.803824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.804180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.804498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.804506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.804808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.804988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.804997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.805299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.805610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.805618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.805958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.806243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.806252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 
00:26:52.224 [2024-05-15 11:12:48.806576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.806958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.806967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.224 qpair failed and we were unable to recover it. 00:26:52.224 [2024-05-15 11:12:48.807342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.224 [2024-05-15 11:12:48.807677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.807685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.807985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.808278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.808285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.808614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.808924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.808932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.809110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.809393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.809401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.809712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.810002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.810010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.810321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.810608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.810616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 
00:26:52.225 [2024-05-15 11:12:48.810902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.811249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.811256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.811542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.811835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.811842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.812128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.812428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.812435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.812752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.813079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.813086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.813381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.813698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.813706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.814013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.814329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.814337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.814616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.814915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.814922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 
00:26:52.225 [2024-05-15 11:12:48.815234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.815552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.815560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.815789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.816120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.816127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.816425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.816713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.816721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.817021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.817347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.817354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.817737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.818017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.818025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.818341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.818645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.818653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.819047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.819329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.819336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 
00:26:52.225 [2024-05-15 11:12:48.819621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.819950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.819957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.820259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.820572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.820581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.820892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.821214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.821221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.821517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.821803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.821810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.822128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.822439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.822447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.822668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.822964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.822971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.823265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.823579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.823587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 
00:26:52.225 [2024-05-15 11:12:48.823908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.824201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.824208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.824523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.824840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.824847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.225 qpair failed and we were unable to recover it. 00:26:52.225 [2024-05-15 11:12:48.825133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.225 [2024-05-15 11:12:48.825448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.825455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.825647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.825930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.825937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.826240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.826562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.826571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.826862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.827182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.827189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.827491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.827807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.827814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 
00:26:52.226 [2024-05-15 11:12:48.828118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.828427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.828434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.828709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.829033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.829041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.829354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.829556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.829564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.829876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.830210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.830218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.830531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.830852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.830860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.831232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.831518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.831526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.831847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.832161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.832168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 
00:26:52.226 [2024-05-15 11:12:48.832456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.832745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.832752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.833067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.833397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.833405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.833779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.834064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.834071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.834390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.834708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.834715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.835022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.835347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.835353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.835667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.835997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.836005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.836308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.836624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.836631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 
00:26:52.226 [2024-05-15 11:12:48.836943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.837259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.837267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.837617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.837786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.837793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.838069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.838378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.838385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.838700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.838999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.839007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.226 [2024-05-15 11:12:48.839314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.839632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.226 [2024-05-15 11:12:48.839640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.226 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.839933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.840264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.840273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.840575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.840927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.840935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 
00:26:52.227 [2024-05-15 11:12:48.841236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.841550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.841557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.841921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.842205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.842214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.842534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.842829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.842837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.843213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.843500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.843509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.843818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.844135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.844142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.844443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.844728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.844736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.845051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.845327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.845335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 
00:26:52.227 [2024-05-15 11:12:48.845630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.845946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.845953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.846259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.846572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.846581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.846812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.847143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.847150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.847470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.847790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.847798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.848102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.848426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.848433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.848743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.848986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.848994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.849367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.849656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.849666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 
00:26:52.227 [2024-05-15 11:12:48.849868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.850173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.850181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.850488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.850783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.850790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.851078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.851327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.851334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.851637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.851956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.851963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.852267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.852586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.852594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.852755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.853053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.853061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.853364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.853674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.853681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 
00:26:52.227 [2024-05-15 11:12:48.853994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.854317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.854325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.854618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.854769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.854777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.855070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.855393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.855402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.855705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.856009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.856016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.856303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.856498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.856505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.856805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.856987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.856994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.227 qpair failed and we were unable to recover it. 00:26:52.227 [2024-05-15 11:12:48.857171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.227 [2024-05-15 11:12:48.857458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.857465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 
00:26:52.228 [2024-05-15 11:12:48.857775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.858090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.858098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.858409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.858713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.858720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.859039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.859374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.859381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.859718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.860030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.860038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.860342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.860657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.860665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.860969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.861284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.861292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.861574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.861848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.861855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 
00:26:52.228 [2024-05-15 11:12:48.862157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.862443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.862450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.862658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.862930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.862937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.863224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.863538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.863549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.863823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.864028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.864035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.864195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.864513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.864521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.864830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.865143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.865151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.865452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.865776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.865783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 
00:26:52.228 [2024-05-15 11:12:48.866090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.866277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.866284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.866619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.866952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.866959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.867267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.867482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.867490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.867793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.868098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.868105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.868428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.868717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.868725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.228 [2024-05-15 11:12:48.869032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.869347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.228 [2024-05-15 11:12:48.869354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.228 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.869660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.869970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.869977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 
00:26:52.499 [2024-05-15 11:12:48.870277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.870594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.870602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.870916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.871235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.871242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.871543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.871862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.871869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.872079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.872246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.872254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.872520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.872805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.872814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.873159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.873464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.873473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.873793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.874103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.874111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 
00:26:52.499 [2024-05-15 11:12:48.874416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.874736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.874744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.875042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.875293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.875301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.875597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.875898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.875905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.876235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.876527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.876534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.876829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.877141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.877148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.877475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.877785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.877793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 00:26:52.499 [2024-05-15 11:12:48.878101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.878417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.499 [2024-05-15 11:12:48.878424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.499 qpair failed and we were unable to recover it. 
00:26:52.500 [2024-05-15 11:12:48.878720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.878972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.878979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.879279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.879596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.879604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.879907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.880158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.880166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.880471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.880793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.880801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.881092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.881409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.881417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.881603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.881969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.881976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.882271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.882591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.882598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 
00:26:52.500 [2024-05-15 11:12:48.882865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.883104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.883111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.883303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.883610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.883618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.883920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.884233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.884240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.884566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.884890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.884897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.885209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.885379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.885387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.885695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.886016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.886023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.886304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.886614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.886622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 
00:26:52.500 [2024-05-15 11:12:48.886928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.887247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.887254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.887549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.887738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.887746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.888059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.888369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.888376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.888662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.888952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.888959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.889119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.889424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.889432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.889718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.890043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.890051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.890360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.890695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.890703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 
00:26:52.500 [2024-05-15 11:12:48.891016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.891346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.891353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.891676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.891740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.891747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.891926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.892242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.892250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.892558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.892875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.892883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.893208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.893518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.893526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.893802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.894123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.894130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.894504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.894756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.894764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 
00:26:52.500 [2024-05-15 11:12:48.895091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.895403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.895411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.500 qpair failed and we were unable to recover it. 00:26:52.500 [2024-05-15 11:12:48.895750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.500 [2024-05-15 11:12:48.896058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.896065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.896371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.896699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.896706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.897030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.897363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.897371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.897693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.897959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.897967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.898272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.898604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.898611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.898943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.899232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.899239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 
00:26:52.501 [2024-05-15 11:12:48.899540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.899706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.899714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.899991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.900307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.900314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.900637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.900925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.900932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.901226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.901495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.901504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.901709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.902043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.902051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.902376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.902663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.902671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.902972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.903286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.903293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 
00:26:52.501 [2024-05-15 11:12:48.903606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.903923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.903930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.904220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.904535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.904542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.904821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.905138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.905146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.905448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.905765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.905773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.906095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.906407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.906415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.906716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.907022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.907030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.907342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.907630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.907638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 
00:26:52.501 [2024-05-15 11:12:48.907933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.908278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.908286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.908459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.908763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.908771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.909078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.909405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.909412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.909672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.909970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.909978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.910275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.910586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.910594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.910899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.911213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.911220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 00:26:52.501 [2024-05-15 11:12:48.911438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.911746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.501 [2024-05-15 11:12:48.911755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.501 qpair failed and we were unable to recover it. 
00:26:52.501-00:26:52.507 [The identical three-line failure repeats for every remaining connection attempt between 2024-05-15 11:12:48.912062 and 11:12:48.999125: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:26:52.507 [2024-05-15 11:12:48.999425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:48.999752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:48.999761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.000058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.000343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.000351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.000638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.000915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.000924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.001274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.001475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.001484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.001753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.002075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.002083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.002374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.002679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.002687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.003001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.003276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.003285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 
00:26:52.507 [2024-05-15 11:12:49.003591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.003766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.003774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.004046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.004373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.004381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.004678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.004994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.005002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.005295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.005614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.005622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.005911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.006232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.006240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.006550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.006802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.006810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.007117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.007447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.007456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 
00:26:52.507 [2024-05-15 11:12:49.007746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.008080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.008087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.008394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.008638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.008645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.008940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.009224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.009232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.009551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.009829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.009836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.010116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.010427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.010434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.010611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.010874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.010881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.011239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.011526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.011533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 
00:26:52.507 [2024-05-15 11:12:49.011915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.012246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.012254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.012557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.012878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.012885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.013209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.013505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.013512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.013843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.014154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.014162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.014466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.014747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.014754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.015046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.015357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.015364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 00:26:52.507 [2024-05-15 11:12:49.015676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.015955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.507 [2024-05-15 11:12:49.015962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.507 qpair failed and we were unable to recover it. 
00:26:52.507 [2024-05-15 11:12:49.016256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.016574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.016582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.016900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.017217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.017226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.017514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.017818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.017826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.018142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.018429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.018436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.018615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.018918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.018926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.019299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.019590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.019598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.019923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.020195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.020203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 
00:26:52.508 [2024-05-15 11:12:49.020492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.020784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.020792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.020972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.021252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.021260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.021587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.021871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.021878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.022198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.022516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.022523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.022825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.023153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.023160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.023319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.023578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.023587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.023767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.024031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.024039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 
00:26:52.508 [2024-05-15 11:12:49.024367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.024650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.024658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.025038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.025316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.025324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.025637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.025950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.025958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.026252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.026564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.026571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.026889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.027213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.027221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.027376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.027645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.027653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.027972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.028285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.028292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 
00:26:52.508 [2024-05-15 11:12:49.028599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.028914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.028922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.029105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.029399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.029406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.029709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.030036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.030044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.508 [2024-05-15 11:12:49.030340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.030620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.508 [2024-05-15 11:12:49.030627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.508 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.030931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.031258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.031266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.031560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.031832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.031839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.032045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.032307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.032314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 
00:26:52.509 [2024-05-15 11:12:49.032622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.032812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.032819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.033111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.033426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.033433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.033738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.034069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.034076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.034363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.034690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.034698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.035001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.035270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.035278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.035585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.035870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.035877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.036190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.036522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.036530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 
00:26:52.509 [2024-05-15 11:12:49.036841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.037137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.037145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.037446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.037759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.037767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.038060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.038351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.038358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.038533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.038813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.038821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.039014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.039329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.039337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.039670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.039960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.039967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.040270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.040565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.040574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 
00:26:52.509 [2024-05-15 11:12:49.040732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.041067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.041074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.041424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.041710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.041718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.042030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.042356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.042362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.042654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.042986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.042994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.043296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.043607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.043614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.043924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.044200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.044208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.044383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.044670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.044677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 
00:26:52.509 [2024-05-15 11:12:49.044984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.045186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.045193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.045371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.045705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.045712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.046019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.046309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.046318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.046607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.046944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.046951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.047255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.047603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.509 [2024-05-15 11:12:49.047610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.509 qpair failed and we were unable to recover it. 00:26:52.509 [2024-05-15 11:12:49.047861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.048189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.048196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.048505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.048827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.048835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 
00:26:52.510 [2024-05-15 11:12:49.049124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.049403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.049411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.049725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.049952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.049959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.050260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.050551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.050558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.050847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.051161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.051169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.051454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.051763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.051771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.052072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.052404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.052411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.052719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.053028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.053036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 
00:26:52.510 [2024-05-15 11:12:49.053336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.053649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.053656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.053928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.054236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.054243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.054549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.054706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.054714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.055023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.055334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.055342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.055659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.055950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.055958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.056283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.056574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.056582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.056873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.057140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.057148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 
00:26:52.510 [2024-05-15 11:12:49.057446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.057735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.057743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.057950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.058120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.058128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.058457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.058779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.058788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.059089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.059419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.059428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.059850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.060136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.060145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.060449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.060760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.060769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.061058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.061368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.061376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 
00:26:52.510 [2024-05-15 11:12:49.061693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.061979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.061987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.062291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.062594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.062603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.062897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.063245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.063254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.063534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.063848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.063856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.064160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.064449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.064457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.064777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.065090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.065098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 00:26:52.510 [2024-05-15 11:12:49.065254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.065565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.510 [2024-05-15 11:12:49.065573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.510 qpair failed and we were unable to recover it. 
00:26:52.511 [2024-05-15 11:12:49.065869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.066183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.066191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.066427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.066767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.066776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.067078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.067389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.067397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.067773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.068084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.068092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.068380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.068694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.068703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.069002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.069352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.069360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.069661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.069973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.069982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 
00:26:52.511 [2024-05-15 11:12:49.070298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.070611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.070620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.070922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.071229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.071237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.071538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.071851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.071859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.072162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.072477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.072485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.072828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.073141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.073149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.073436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.073716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.073724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.074039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.074342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.074350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 
00:26:52.511 [2024-05-15 11:12:49.074660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.074953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.074960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.075249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.075498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.075508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.075835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.076147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.076154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.076453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.076617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.076625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.076929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.077241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.077249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.077551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.077863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.077870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.078157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.078470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.078478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 
00:26:52.511 [2024-05-15 11:12:49.078787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.079100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.079107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.079453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.079738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.079745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.080106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.080426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.080434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.080769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.080958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.080965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.081289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.081620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.081627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.082001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.082287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.082294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.082593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.082837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.082845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 
00:26:52.511 [2024-05-15 11:12:49.083141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.083436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.083443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.511 qpair failed and we were unable to recover it. 00:26:52.511 [2024-05-15 11:12:49.083762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.084050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.511 [2024-05-15 11:12:49.084057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.084346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.084658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.084665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.084829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.085132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.085141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.085459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.085784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.085791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.086103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.086437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.086445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.086766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.087078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.087085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 
00:26:52.512 [2024-05-15 11:12:49.087391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.087705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.087713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.088005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.088319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.088326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.088527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.088833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.088841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.089145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.089464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.089471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.089814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.090136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.090143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.090461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.090782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.090789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.091097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.091413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.091419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 
00:26:52.512 [2024-05-15 11:12:49.091806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.092090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.092098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.092263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.092602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.092610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.092895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.093209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.093216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.093493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.093875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.093882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.094065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.094381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.094390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.094709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.095021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.095029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.095353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.095635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.095642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 
00:26:52.512 [2024-05-15 11:12:49.095942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.096257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.096264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.096564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.096877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.096884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.097192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.097500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.097508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.097822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.098132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.098139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.512 qpair failed and we were unable to recover it. 00:26:52.512 [2024-05-15 11:12:49.098445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.512 [2024-05-15 11:12:49.098765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.098773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.098950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.099231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.099239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.099544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.099833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.099840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 
00:26:52.513 [2024-05-15 11:12:49.100138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.100175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.100182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.100456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.100603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.100611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.100891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.101168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.101177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.101479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.101752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.101760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.102117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.102445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.102453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.102770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.103081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.103089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.103414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.103745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.103752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 
00:26:52.513 [2024-05-15 11:12:49.103905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.104222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.104230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.104586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.104869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.104876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.105168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.105477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.105485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.105792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.106105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.106113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.106419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.106713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.106721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.107101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.107342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.107349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.107561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.107887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.107895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 
00:26:52.513 [2024-05-15 11:12:49.108169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.108492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.108499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.108779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.109096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.109103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.109426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.109731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.109738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.110004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.110319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.110326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.110618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.110946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.110953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.111260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.111576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.111584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.111775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.112035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.112042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 
00:26:52.513 [2024-05-15 11:12:49.112355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.112644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.112652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.112954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.113093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.113100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.113368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.113566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.113573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.113854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.114051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.114059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.114362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.114673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.114680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.114988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.115284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.513 [2024-05-15 11:12:49.115292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.513 qpair failed and we were unable to recover it. 00:26:52.513 [2024-05-15 11:12:49.115588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.115924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.115932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 
00:26:52.514 [2024-05-15 11:12:49.116252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.116533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.116541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.116863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.117033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.117041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.117340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.117650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.117657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.117857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.118124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.118132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.118422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.118715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.118723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.119024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.119337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.119344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.119645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.119960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.119967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 
00:26:52.514 [2024-05-15 11:12:49.120276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.120636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.120643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.120863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.121191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.121198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.121498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.121799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.121806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.122102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.122421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.122428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.122743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.123056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.123065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.123374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.123676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.123685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.123990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.124302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.124310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 
00:26:52.514 [2024-05-15 11:12:49.124616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.124931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.124940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.125255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.125538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.125549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.125819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.126133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.126140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.126499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.126714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.126721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.127027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.127347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.127354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.127660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.127974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.127981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.128142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.128517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.128524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 
00:26:52.514 [2024-05-15 11:12:49.128688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.129000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.129007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.129313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.129639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.129646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.129810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.130108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.130116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.130406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.130726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.130735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.131033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.131348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.131355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.131655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.131853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.131861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.514 [2024-05-15 11:12:49.132261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.132592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.132599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 
00:26:52.514 [2024-05-15 11:12:49.132785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.133103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.514 [2024-05-15 11:12:49.133110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.514 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.133416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.133728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.133735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.134039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.134327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.134335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.134636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.134920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.134928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.135249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.135579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.135586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.135886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.136137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.136145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.136455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.136629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.136638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 
00:26:52.515 [2024-05-15 11:12:49.136957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.137271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.137278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.137609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.137910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.137918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.138264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.138574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.138581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.138891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.139080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.139087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.139299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.139568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.139577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.139782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.140074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.140081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.140388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.140708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.140716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 
00:26:52.515 [2024-05-15 11:12:49.141015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.141298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.141305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.141604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.141912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.141919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.515 [2024-05-15 11:12:49.142259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.142606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.515 [2024-05-15 11:12:49.142617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.515 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.142903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.143229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.143237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.143542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.143861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.143869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.144168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.144480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.144488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.144684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.144999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.145007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 
00:26:52.786 [2024-05-15 11:12:49.145328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.145647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.145654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.145927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.146257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.146266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.146605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.146906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.146915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.147206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.147542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.147553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.147873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.148159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.148168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.148469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.148778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.148786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.149092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.149402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.149409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 
00:26:52.786 [2024-05-15 11:12:49.149684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.149957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.149965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.150270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.150555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.150564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.150876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.151189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.151196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.151504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.151819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.151826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.152109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.152422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.152429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.152607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.152888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.152895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.153200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.153514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.153522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 
00:26:52.786 [2024-05-15 11:12:49.153839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.154149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.154156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.154446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.154726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.154733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.154992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.155253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.155260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.155574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.155843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.155851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.156144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.156459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.156466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.156726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.157034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.157041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 00:26:52.786 [2024-05-15 11:12:49.157359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.157654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.786 [2024-05-15 11:12:49.157661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.786 qpair failed and we were unable to recover it. 
00:26:52.786 [2024-05-15 11:12:49.157959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.158269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.158277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.158585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.158903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.158910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.159234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.159570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.159577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.159879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.160192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.160199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.160495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.160808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.160815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.161119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.161400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.161408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.161570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.161844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.161851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 
00:26:52.787 [2024-05-15 11:12:49.162157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.162495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.162502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.162802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.162984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.162991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.163280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.163602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.163610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.163892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.164205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.164212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.164506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.164828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.164835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.165140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.165459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.165467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.165766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.166095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.166102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 
00:26:52.787 [2024-05-15 11:12:49.166431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.166634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.166641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.166938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.167252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.167259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.167564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.167882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.167889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.168191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.168478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.168485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.168807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.169141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.169148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.169451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.169763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.169771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.170073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.170387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.170394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 
00:26:52.787 [2024-05-15 11:12:49.170705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.171030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.171037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.171374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.171645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.171652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.171946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.172259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.172268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.172567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.172879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.172887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.173199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.173535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.173542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.173844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.174155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.174162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.174325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.174512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.174519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 
00:26:52.787 [2024-05-15 11:12:49.174803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.175136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.175144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.787 qpair failed and we were unable to recover it. 00:26:52.787 [2024-05-15 11:12:49.175444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.175756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.787 [2024-05-15 11:12:49.175763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.176088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.176396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.176403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.176700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.176855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.176863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.177121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.177434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.177441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.177765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.177954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.177961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.178283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.178599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.178606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 
00:26:52.788 [2024-05-15 11:12:49.178933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.179219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.179227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.179417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.179732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.179739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.180048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.180300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.180307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.180597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.180935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.180943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.181242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.181553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.181561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.181869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.182199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.182206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.182515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.182695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.182703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 
00:26:52.788 [2024-05-15 11:12:49.183006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.183310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.183317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.183607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.183914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.183921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.184223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.184534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.184541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.184831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.185132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.185139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.185428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.185742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.185750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.186057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.186386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.186394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.186737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.187069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.187077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 
00:26:52.788 [2024-05-15 11:12:49.187378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.187696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.187704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.188005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.188322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.188329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.188623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.188954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.188962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.189263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.189574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.189581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.189885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.190215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.190222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.190259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.190557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.190565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.788 qpair failed and we were unable to recover it. 00:26:52.788 [2024-05-15 11:12:49.190843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.788 [2024-05-15 11:12:49.191163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.191171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 
00:26:52.789 [2024-05-15 11:12:49.191485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.191798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.191805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.192114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.192424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.192432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.192709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.193028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.193035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.193348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.193640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.193647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.193843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.194148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.194155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.194462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.194626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.194634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.194931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.195241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.195248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 
00:26:52.789 [2024-05-15 11:12:49.195542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.195823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.195831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.196133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.196445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.196452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.196736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.197051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.197059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.197359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.197569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.197577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.197858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.198186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.198194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.198507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.198789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.198797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.199101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.199415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.199423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 
00:26:52.789 [2024-05-15 11:12:49.199713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.200042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.200049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.200358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.200669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.200677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.201036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.201345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.201353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.201650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.201982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.201989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.202279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.202505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.202512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.202807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.203129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.203137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.203441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.203717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.203724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 
00:26:52.789 [2024-05-15 11:12:49.204034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.204345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.204352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.204643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.204958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.204965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.205273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.205562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.205570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.205874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.206058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.206065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.206393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.206681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.206688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.207010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.207323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.207332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.207640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.207946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.207954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 
00:26:52.789 [2024-05-15 11:12:49.208161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.208450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.789 [2024-05-15 11:12:49.208459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.789 qpair failed and we were unable to recover it. 00:26:52.789 [2024-05-15 11:12:49.208806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.209094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.209103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.209426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.209600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.209609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.209931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.210263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.210270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.210572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.210892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.210900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.211208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.211519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.211526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.211813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.212124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.212131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 
00:26:52.790 [2024-05-15 11:12:49.212425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.212735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.212742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.213044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.213376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.213383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.213700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.214016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.214023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.214199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.214477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.214486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.214793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.215088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.215097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.215223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.215423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.215431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.215743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.216061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.216068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 
00:26:52.790 [2024-05-15 11:12:49.216447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.216710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.216718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.216902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.217209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.217216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.217527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.217816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.217823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.218130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.218449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.218456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.218744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.219058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.219065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.219384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.219718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.219725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 00:26:52.790 [2024-05-15 11:12:49.220105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.220393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.220402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.790 qpair failed and we were unable to recover it. 
00:26:52.790 [2024-05-15 11:12:49.220707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.790 [2024-05-15 11:12:49.220996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.221006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.221291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.221514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.221520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.221719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.222055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.222062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.222368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.222571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.222579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.222779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.223067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.223074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.223361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.223672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.223680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.223983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.224161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.224169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 
00:26:52.791 [2024-05-15 11:12:49.224473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.224794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.224802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.225109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.225425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.225433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.225756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.226068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.226075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.226384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.226688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.226698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.226992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.227306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.227313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.227474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.227748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.227756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.228058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.228409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.228416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 
00:26:52.791 [2024-05-15 11:12:49.228715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.229045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.229053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.229354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.229633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.229640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.229841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.230151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.230160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.230431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.230745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.230753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.231061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.231347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.231354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.231658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.231972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.231979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.232284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.232614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.232625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 
00:26:52.791 [2024-05-15 11:12:49.232909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.233161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.233169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.233475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.233787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.233795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.234102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.234386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.234393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.234719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.235048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.235055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.235395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.235636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.235643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.235936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.236220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.236227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.236534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.236748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.236756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 
00:26:52.791 [2024-05-15 11:12:49.237076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.237394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.791 [2024-05-15 11:12:49.237401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.791 qpair failed and we were unable to recover it. 00:26:52.791 [2024-05-15 11:12:49.237734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.238021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.238028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.238335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.238646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.238655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.238960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.239236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.239244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.239554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.239870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.239878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.240151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.240463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.240471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.240771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.241069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.241077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 
00:26:52.792 [2024-05-15 11:12:49.241395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.241703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.241710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.242013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.242332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.242340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.242655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.242929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.242936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.243210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.243519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.243526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.243836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.244150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.244157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.244459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.244775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.244783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.245092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.245263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.245270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 
00:26:52.792 [2024-05-15 11:12:49.245570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.245863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.245870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.246165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.246487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.246495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.246668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.247050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.247058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.247387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.247689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.247697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.248018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.248320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.248327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.248594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.248927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.248935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.249235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.249556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.249564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 
00:26:52.792 [2024-05-15 11:12:49.249887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.250199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.250206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.250481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.250734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.250742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.792 [2024-05-15 11:12:49.251097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.251420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.792 [2024-05-15 11:12:49.251427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.792 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.251729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.252023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.252030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.252357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.252680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.252688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.252997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.253318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.253325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.253521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.253843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.253850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 
00:26:52.793 [2024-05-15 11:12:49.254159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.254313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.254321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.254607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.254898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.254905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.255082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.255413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.255420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.255626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.255916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.255924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.256229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.256561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.256568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.256833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.257089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.257097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.257392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.257687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.257696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 
00:26:52.793 [2024-05-15 11:12:49.258009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.258331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.258338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.258641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.258946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.258953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.259126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.259298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.259305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.259614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.259903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.259910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.260221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.260510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.260517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.260854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.261143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.261150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.261440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.261641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.261648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 
00:26:52.793 [2024-05-15 11:12:49.261950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.262265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.262272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.262565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.262865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.262872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.263075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.263248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.263256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.263423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.263711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.263719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.264033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.264354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.264362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.264654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.264970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.264978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.265274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.265552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.265559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 
00:26:52.793 [2024-05-15 11:12:49.265829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.266115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.266122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.266470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.266753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.266761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.267065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.267351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.793 [2024-05-15 11:12:49.267359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.793 qpair failed and we were unable to recover it. 00:26:52.793 [2024-05-15 11:12:49.267653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.267948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.267955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.268284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.268589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.268596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.268915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.269230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.269238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.269539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.269842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.269850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 
00:26:52.794 [2024-05-15 11:12:49.270126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.270453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.270460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.270757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.270959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.270967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.271256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.271570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.271577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.271930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.272243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.272251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.272543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.272839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.272846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.273141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.273310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.273317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.273585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.273849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.273857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 
00:26:52.794 [2024-05-15 11:12:49.274235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.274522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.274529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.274830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.275080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.275087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.275378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.275710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.275718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.276020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.276236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.276244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.276537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.276831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.276839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.277142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.277451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.277458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.277774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.278072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.278079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 
00:26:52.794 [2024-05-15 11:12:49.278395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.278707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.278715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.279038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.279389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.279397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.279585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.279862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.279870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.280173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.280467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.280475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.280769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.281099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.281107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.281418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.281714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.281722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.282014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.282327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.282334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 
00:26:52.794 [2024-05-15 11:12:49.282610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.282903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.282910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.283222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.283517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.794 [2024-05-15 11:12:49.283524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.794 qpair failed and we were unable to recover it. 00:26:52.794 [2024-05-15 11:12:49.283806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.284120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.284128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.284435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.284746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.284754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.285037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.285219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.285228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.285531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.285828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.285837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.285996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.286303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.286312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 
00:26:52.795 [2024-05-15 11:12:49.286609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.286924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.286933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.287220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.287533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.287541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.287877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.288069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.288076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.288393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.288725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.288733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.288911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.289088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.289095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.289384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.289587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.289594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.289915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.290247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.290255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 
00:26:52.795 [2024-05-15 11:12:49.290557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.290850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.290857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.291051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.291371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.291378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.291691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.292014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.292021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.292296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.292590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.292598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.292901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.293215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.293223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.293404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.293733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.293741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.294082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.294396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.294404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 
00:26:52.795 [2024-05-15 11:12:49.294703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.295029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.295036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.295343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.295499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.295508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.295782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.296101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.296109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.296432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.296757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.296764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.297076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.297393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.297400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.795 qpair failed and we were unable to recover it. 00:26:52.795 [2024-05-15 11:12:49.297694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.795 [2024-05-15 11:12:49.297974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.297981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.298185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.298454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.298462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 
00:26:52.796 [2024-05-15 11:12:49.298710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.299012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.299019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.299338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.299630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.299638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.299943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.300268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.300275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.300585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.300889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.300896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.301098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.301417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.301426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.301721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.302034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.302041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.302349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.302657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.302664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 
00:26:52.796 [2024-05-15 11:12:49.302996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.303318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.303325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.303648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.303927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.303934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.304244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.304516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.304524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.304815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.305085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.305093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.305408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.305553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.305564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.305839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.306090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.306098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.306396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.306709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.306717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 
00:26:52.796 [2024-05-15 11:12:49.307037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.307350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.307357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.307656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.307977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.307984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.308314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.308490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.308497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.308807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.309104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.309111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.309413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.309706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.309713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.309876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.310191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.310200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.310520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.310698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.310705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 
00:26:52.796 [2024-05-15 11:12:49.310891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.311174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.311182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.311469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.311815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.311822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.312129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.312447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.312454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.796 qpair failed and we were unable to recover it. 00:26:52.796 [2024-05-15 11:12:49.312765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.313067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.796 [2024-05-15 11:12:49.313074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.313377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.313705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.313713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.314020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.314321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.314329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.314651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.314956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.314964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 
00:26:52.797 [2024-05-15 11:12:49.315267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.315595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.315605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.315905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.316216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.316224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.316533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.316809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.316818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.317079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.317394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.317402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.317726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.318026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.318034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.318332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.318663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.318672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.318977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.319295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.319303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 
00:26:52.797 [2024-05-15 11:12:49.319648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.319967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.319975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.320311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.320481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.320490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.320801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.321114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.321122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.321285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.321590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.321601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.321905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.322076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.322084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.322349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.322658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.322666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.322972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.323154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.323163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 
00:26:52.797 [2024-05-15 11:12:49.323375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.323641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.323649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.323956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.324267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.324274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.324571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.324758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.324766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.325074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.325378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.325385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.325682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.325985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.325993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.326291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.326601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.326609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.326900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.327097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.327106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 
00:26:52.797 [2024-05-15 11:12:49.327411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.327581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.327589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.327858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.328189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.328196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.328499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.328788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.328796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.329075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.329384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.329392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.329702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.330032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.797 [2024-05-15 11:12:49.330039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.797 qpair failed and we were unable to recover it. 00:26:52.797 [2024-05-15 11:12:49.330344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.330653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.330660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.330975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.331288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.331295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 
00:26:52.798 [2024-05-15 11:12:49.331503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.331690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.331698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.332017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.332294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.332302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.332618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.332901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.332909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.333220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.333533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.333540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.333829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.334142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.334149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.334440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.334718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.334725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.335047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.335325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.335333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 
00:26:52.798 [2024-05-15 11:12:49.335660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.335942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.335950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.336239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.336554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.336561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.336874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.337047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.337056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.337361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.337525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.337533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.337856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.338172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.338180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.338521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.338806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.338815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.339129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.339377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.339385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 
00:26:52.798 [2024-05-15 11:12:49.339686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.339969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.339977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.340152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.340325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.340333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.340634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.340822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.340829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.341170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.341454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.341461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.341795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.342125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.342132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.342439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.342607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.342616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.342984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.343297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.343305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 
00:26:52.798 [2024-05-15 11:12:49.343606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.343941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.343949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.344247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.344566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.344573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.798 qpair failed and we were unable to recover it. 00:26:52.798 [2024-05-15 11:12:49.344867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.798 [2024-05-15 11:12:49.345187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.345194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.345386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.345564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.345571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.345842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.346160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.346167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.346469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.346792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.346800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.347069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.347384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.347391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 
00:26:52.799 [2024-05-15 11:12:49.347703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.347988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.347995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.348263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.348580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.348587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.348905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.349233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.349239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.349561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.349842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.349849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.350142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.350313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.350321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.350592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.350920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.350929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.351113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.351378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.351385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 
00:26:52.799 [2024-05-15 11:12:49.351563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.351858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.351865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.352189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.352503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.352512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.352815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.353108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.353116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.353420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.353737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.353745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.354037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.354342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.354350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.354654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.354954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.354962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 00:26:52.799 [2024-05-15 11:12:49.355262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.355592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.799 [2024-05-15 11:12:49.355600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:52.799 qpair failed and we were unable to recover it. 
00:26:52.799 [2024-05-15 11:12:49.355903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.799 [2024-05-15 11:12:49.356217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.799 [2024-05-15 11:12:49.356224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:52.799 qpair failed and we were unable to recover it.
[... the same four-entry sequence (two posix.c:1037:posix_sock_create connect() failures with errno = 111, the nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock error for tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every retry, Jenkins timestamps 00:26:52.799 through 00:26:53.071, target timestamps 11:12:49.356 through 11:12:49.450 ...]
00:26:53.071 [2024-05-15 11:12:49.450555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.071 [2024-05-15 11:12:49.450826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.071 [2024-05-15 11:12:49.450833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:53.071 qpair failed and we were unable to recover it.
00:26:53.071 [2024-05-15 11:12:49.451153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.451490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.451497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.451802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.452118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.452125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.452467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.452749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.452757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.453078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.453415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.453422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.453750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.454087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.454094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.454312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.454612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.454620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.454934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.455273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.455280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 
00:26:53.071 [2024-05-15 11:12:49.455588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.455883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.455891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.456188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.456508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.456516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.456675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.456951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.456959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.457258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.457419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.457427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.457704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.457996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.458005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.458293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.458601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.458608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.458930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.459089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.459096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 
00:26:53.071 [2024-05-15 11:12:49.459411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.459724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.459733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.459977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.460329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.460337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.460659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.460974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.460981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.461276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.461477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.461485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.461802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.462005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.462012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.071 qpair failed and we were unable to recover it. 00:26:53.071 [2024-05-15 11:12:49.462274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.462591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.071 [2024-05-15 11:12:49.462599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.463018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.463353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.463360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 
00:26:53.072 [2024-05-15 11:12:49.463663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.463980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.463987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.464294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.464630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.464638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.464835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.465096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.465104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.465391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.465661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.465669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.465856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.466191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.466198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.466503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.466836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.466844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.467138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.467452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.467459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 
00:26:53.072 [2024-05-15 11:12:49.467736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.468034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.468041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.468215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.468396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.468403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.468710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.469038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.469046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.469345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.469548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.469555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.469714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.469983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.469991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.470303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.470616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.470623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.470939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.471271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.471278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 
00:26:53.072 [2024-05-15 11:12:49.471482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.471689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.471697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.471992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.472311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.472318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.472619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.472956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.472962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.473263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.473585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.473593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.473961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.474154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.474161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.474532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.474801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.474809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.474990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.475282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.475290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 
00:26:53.072 [2024-05-15 11:12:49.475593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.475965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.475973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.476264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.476574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.476582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.476904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.477218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.477225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.477534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.477872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.477880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.478180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.478499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.478506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.478826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.479123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.479130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.479422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.479999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.480016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 
00:26:53.072 [2024-05-15 11:12:49.480328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.480629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.480636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.480719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.480999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.481007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.481314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.481611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.481619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.481989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.482326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.482333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.482625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.482884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.482892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.483201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.483478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.483485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.483776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.484106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.484113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 
00:26:53.072 [2024-05-15 11:12:49.484428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.484711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.484718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.485036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.485354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.485361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.485668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.485990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.485997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.486300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.486625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.486633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.486928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.487141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.487148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.487447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.487647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.487654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 00:26:53.072 [2024-05-15 11:12:49.487983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.488270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.488277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.072 qpair failed and we were unable to recover it. 
00:26:53.072 [2024-05-15 11:12:49.488597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.072 [2024-05-15 11:12:49.488906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.488913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.489209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.489405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.489412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.489614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.489874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.489881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.490187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.490347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.490354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.490736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.491022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.491030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.491238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.491411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.491418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.491742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.491923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.491930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 
00:26:53.073 [2024-05-15 11:12:49.492243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.492498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.492506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.492829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.493116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.493124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.493421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.493630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.493638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.493810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.493849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.493856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.494152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.494485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.494492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.494719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.495013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.495020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.495329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.495692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.495699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 
00:26:53.073 [2024-05-15 11:12:49.495979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.496288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.496295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.496610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.496917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.496925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.497142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.497337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.497345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.497599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.497881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.497889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.498124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.498438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.498445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.498649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.498930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.498937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.499240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.499529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.499536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 
00:26:53.073 [2024-05-15 11:12:49.499867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.500146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.500154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.500458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.500756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.500764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.501062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.501393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.501400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.501702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.502031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.502038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.502361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.502658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.502668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.502876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.503201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.503208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.503505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.503774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.503782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 
00:26:53.073 [2024-05-15 11:12:49.504077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.504389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.504396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.504757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.505070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.505077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.505375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.505635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.505643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.505818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.506136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.506144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.506325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.506598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.506606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.506891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.507697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.507719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.507922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.508127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.508135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 
00:26:53.073 [2024-05-15 11:12:49.508314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.508590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.508600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.073 qpair failed and we were unable to recover it. 00:26:53.073 [2024-05-15 11:12:49.508930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.509736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.073 [2024-05-15 11:12:49.509751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.509962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.510268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.510275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.510466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.510743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.510750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.511047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.511353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.511361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.511674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.511897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.511904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.512224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.512409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.512416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 
00:26:53.074 [2024-05-15 11:12:49.512721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.513046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.513054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.513347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.513587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.513594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.513908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.514224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.514232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.514525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.514725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.514734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.515054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.515373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.515381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.515662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.515875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.515883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.516079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.516357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.516364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 
00:26:53.074 [2024-05-15 11:12:49.516658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.516993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.517000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.517317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.517612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.517619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.517914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.518226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.518233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.518523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.518845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.518852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.519145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.519464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.519471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.519831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.520154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.520162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.520478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.520766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.520775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 
00:26:53.074 [2024-05-15 11:12:49.521089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.521343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.521350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.521658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.521997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.522005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.522315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.522518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.522525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.522907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.523268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.523275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.523567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.523843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.523850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.524143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.524461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.524469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.524740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.525102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.525109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 
00:26:53.074 [2024-05-15 11:12:49.525317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.525509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.525517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.525812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.526143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.526150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.526329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.526604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.526612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.526934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.527185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.527192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.527496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.527804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.527811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.527994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.528321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.528328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.528619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.528831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.528838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 
00:26:53.074 [2024-05-15 11:12:49.529142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.529440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.529447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.529653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.529915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.529922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.530242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.530560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.530568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.530858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.531043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.531050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.531364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.531648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.531656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.531871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.532170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.532178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.532489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.532792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.532800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 
00:26:53.074 [2024-05-15 11:12:49.533102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.533418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.533426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.533725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.534059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.534066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.534355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.534563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.534571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.074 qpair failed and we were unable to recover it. 00:26:53.074 [2024-05-15 11:12:49.534846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.535046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.074 [2024-05-15 11:12:49.535053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.535369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.535693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.535701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.536005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.536324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.536331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.536688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.536985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.536993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 
00:26:53.075 [2024-05-15 11:12:49.537295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.537568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.537577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.537812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.538076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.538084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.538395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.538712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.538720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.539024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.539335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.539342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.539674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.540008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.540015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.540309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.540560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.540567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.540761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.540979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.540987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 
00:26:53.075 [2024-05-15 11:12:49.541246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.541573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.541581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.541670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.542006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.542013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.542338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.542665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.542672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.542977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.543265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.543273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.543504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.543787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.543795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.544156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.544489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.544496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.544780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.544982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.544990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 
00:26:53.075 [2024-05-15 11:12:49.545313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.545600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.545607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.545927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.546097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.546105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.546391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.546523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.546530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.546847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.547058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.547066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.547387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.547617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.547624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.547818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.548136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.548143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.548336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.548612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.548620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 
00:26:53.075 [2024-05-15 11:12:49.548922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.549231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.549239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.549548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.549738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.549746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.550014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.550344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.550351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.550659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.550952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.550959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.551154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.551511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.551518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.551667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.551876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.551884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.552264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.552485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.552492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 
00:26:53.075 [2024-05-15 11:12:49.552884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.553067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.553075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.553446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.553747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.553754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.554079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.554401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.554408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.554770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.555102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.555110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.555424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.555723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.555731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.556041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.556374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.556381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.556686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.557013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.557020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 
00:26:53.075 [2024-05-15 11:12:49.557322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.557654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.557662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.557832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.558085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.558093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.558418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.558726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.558733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.559040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.559199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.559207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.559401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.559694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.559702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.560018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.560333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.560340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.075 qpair failed and we were unable to recover it. 00:26:53.075 [2024-05-15 11:12:49.560634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.075 [2024-05-15 11:12:49.560839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.560846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 
00:26:53.076 [2024-05-15 11:12:49.561056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.561357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.561364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.561661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.561967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.561974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.562299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.562610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.562618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.562910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.563187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.563194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.563503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.563821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.563828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.564132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.564445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.564452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.564786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.565101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.565109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 
00:26:53.076 [2024-05-15 11:12:49.565407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.565751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.565759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.566064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.566378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.566385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.566695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.567019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.567026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.567352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.567490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.567498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.567799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.568143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.568150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.568457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.568846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.568854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.569154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.569498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.569506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 
00:26:53.076 [2024-05-15 11:12:49.569810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.570090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.570098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.570458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.570767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.570774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.571084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.571413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.571420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.571807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.572053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.572060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.572384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.572684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.572692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.573005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.573325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.573332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.573657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.573951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.573958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 
00:26:53.076 [2024-05-15 11:12:49.574304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.574648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.574656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.575008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.575322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.575329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.575543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.575875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.575883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.576197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.576497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.576505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.576827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.577147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.577154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.577467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.577748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.577756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.578067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.578406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.578414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 
00:26:53.076 [2024-05-15 11:12:49.578720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.579051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.579058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.579363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.579698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.579706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.580037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.580323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.580331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.580656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.580953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.580961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.581286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.581611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.581618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.581915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.582120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.582127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.582425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.582742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.582749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 
00:26:53.076 [2024-05-15 11:12:49.583064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.583371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.583378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.583593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.583905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.583912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.076 qpair failed and we were unable to recover it. 00:26:53.076 [2024-05-15 11:12:49.584205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.076 [2024-05-15 11:12:49.584524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.584532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.584830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.585128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.585135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.585337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.585631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.585639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.585956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.586264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.586272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.586491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.586777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.586785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 
00:26:53.077 [2024-05-15 11:12:49.587092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.587366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.587372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.587684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.587977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.587984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.588279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.588577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.588585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.588874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.589184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.589191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.589514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.589726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.589734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.590061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.590388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.590396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.590701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.590968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.590975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 
00:26:53.077 [2024-05-15 11:12:49.591293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.591569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.591576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.591899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.592219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.592226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.592537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.592872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.592879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.593163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.593413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.593421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.593739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.594066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.594073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.594379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.594579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.594586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.594901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.595213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.595221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 
00:26:53.077 [2024-05-15 11:12:49.595520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.595851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.595859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.596046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.596226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.596234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.596455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.596789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.596796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.596985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.597260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.597267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.597573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.597750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.597761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.598097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.598425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.598432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.598769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.599073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.599080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 
00:26:53.077 [2024-05-15 11:12:49.599283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.599459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.599467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.599784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.599999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.600006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.600276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.600581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.600588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.600933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.601203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.601210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.601555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.601845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.601853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.602158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.602374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.602381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.602688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.603009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.603016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 
00:26:53.077 [2024-05-15 11:12:49.603328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.603632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.603641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.603831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.604129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.604136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.604435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.604761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.604768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.605089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.605405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.605412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.605732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.606044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.606051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.606368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.606723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.606731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.607023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.607307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.607314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 
00:26:53.077 [2024-05-15 11:12:49.607502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.607779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.607787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.608116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.608410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.608417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.608746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.609078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.609085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.077 [2024-05-15 11:12:49.609386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.609711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.077 [2024-05-15 11:12:49.609721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.077 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.610039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.610363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.610370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.610681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.611007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.611015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.611316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.611564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.611571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 
00:26:53.078 [2024-05-15 11:12:49.611958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.612214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.612221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.612505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.612789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.612798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.613094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.613374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.613381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.613697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.613849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.613857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.614213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.614506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.614514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.614870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.615132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.615140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.615468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.615753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.615770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 
00:26:53.078 [2024-05-15 11:12:49.616062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.616332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.616339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.616635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.617002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.617009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.617337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.617666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.617674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.618009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.618294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.618302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.618489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.618685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.618693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.618981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.619270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.619277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.619594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.619899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.619906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 
00:26:53.078 [2024-05-15 11:12:49.620197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.620402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.620410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.620721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.621026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.621033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.621344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.621636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.621644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.622014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.622232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.622240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.622550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.622817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.622825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.623128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.623439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.623447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.623680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.624003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.624010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 
00:26:53.078 [2024-05-15 11:12:49.624163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.624431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.624438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.624768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.625083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.625090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.625395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.625677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.625685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.625865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.626189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.626197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.626514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.626827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.626834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.626986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.627281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.627288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.627560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.627893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.627901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 
00:26:53.078 [2024-05-15 11:12:49.628175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.628477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.628484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.628770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.629084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.629091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.629401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.629691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.629699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.629989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.630241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.630249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.630452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.630629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.630638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.630942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.631252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.631259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.631589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.631812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.631819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 
00:26:53.078 [2024-05-15 11:12:49.632115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.632405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.632413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.632622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.632927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.632935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.633312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.633538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.633549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.633786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.634099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.634106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.634405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.634746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.634754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.635078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.635394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.635402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.635475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.635800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.635809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 
00:26:53.078 [2024-05-15 11:12:49.636085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.636383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.636391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.636783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.637086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.637094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.078 qpair failed and we were unable to recover it. 00:26:53.078 [2024-05-15 11:12:49.637388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.078 [2024-05-15 11:12:49.637713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.637720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.638021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.638303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.638310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.638650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.638950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.638957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.639283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.639616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.639623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.639928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.640244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.640251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 
00:26:53.079 [2024-05-15 11:12:49.640547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.640717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.640726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.641092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.641427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.641434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.641549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.641826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.641835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.642142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.642451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.642458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.642773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.642901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.642908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.643206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.643544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.643553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.643892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.644207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.644215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 
00:26:53.079 [2024-05-15 11:12:49.644532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.644738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.644746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.645057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.645258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.645265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.645611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.645902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.645909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.646237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.646556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.646564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.646905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.647218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.647225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.647400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.647544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.647555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.647769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.648094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.648101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 
00:26:53.079 [2024-05-15 11:12:49.648467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.648744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.648752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.649078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.649395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.649402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.649710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.650016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.650023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.650349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.650651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.650659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.650992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.651280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.651288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.651577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.651763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.651770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.652068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.652284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.652291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 
00:26:53.079 [2024-05-15 11:12:49.652477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.652775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.652783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.652988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.653306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.653313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.653533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.653713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.653722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.653899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.654238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.654245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.654574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.654875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.654882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.655191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.655397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.655404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.655514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.655702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.655709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 
00:26:53.079 [2024-05-15 11:12:49.656028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.656357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.656365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.656663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.657001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.657008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.657333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.657521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.657529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.079 qpair failed and we were unable to recover it. 00:26:53.079 [2024-05-15 11:12:49.657756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.079 [2024-05-15 11:12:49.658090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.080 [2024-05-15 11:12:49.658098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.080 qpair failed and we were unable to recover it. 00:26:53.080 [2024-05-15 11:12:49.658399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.080 [2024-05-15 11:12:49.658704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.080 [2024-05-15 11:12:49.658711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.080 qpair failed and we were unable to recover it. 00:26:53.080 [2024-05-15 11:12:49.658890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.080 [2024-05-15 11:12:49.659186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.080 [2024-05-15 11:12:49.659194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.080 qpair failed and we were unable to recover it. 00:26:53.080 [2024-05-15 11:12:49.659507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.080 [2024-05-15 11:12:49.659704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.080 [2024-05-15 11:12:49.659712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.080 qpair failed and we were unable to recover it. 
00:26:53.346 [2024-05-15 11:12:49.740784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.741099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.741107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-05-15 11:12:49.741315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.741588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.741596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-05-15 11:12:49.741783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.741997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.742004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-05-15 11:12:49.742314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.742620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.742627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-05-15 11:12:49.742893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.743195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.743203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.346 qpair failed and we were unable to recover it. 00:26:53.346 [2024-05-15 11:12:49.743515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.346 [2024-05-15 11:12:49.743730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.743737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.743988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.744288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.744295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 
00:26:53.347 [2024-05-15 11:12:49.744617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.745028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.745035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.745349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.745657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.745665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.746052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.746304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.746312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.746605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.746890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.746897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.747221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.747524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.747531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.747918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.748136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.748143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.748438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.748749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.748756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 
00:26:53.347 [2024-05-15 11:12:49.749088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.749411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.749418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.749715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.750052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.750059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.750263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.750476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.750483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.750829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.751116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.751123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.751329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.751600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.751608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.751917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.752244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.752251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.752554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.752818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.752825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 
00:26:53.347 [2024-05-15 11:12:49.753124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.753413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.753420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.753746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.753911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.753917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.753985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.754287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.754295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.754600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.754899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.754908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.755226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.755562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.755571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.755892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.756076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.756085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.756384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.756697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.756705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 
00:26:53.347 [2024-05-15 11:12:49.757004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.757336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.757344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.757552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.757854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.757862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.758184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.758508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.758516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.758788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.759116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.759124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.759322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.759644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.759652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.759988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.760301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.760309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 00:26:53.347 [2024-05-15 11:12:49.760631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.760702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.347 [2024-05-15 11:12:49.760710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.347 qpair failed and we were unable to recover it. 
00:26:53.347 [2024-05-15 11:12:49.760987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.761255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.761263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.761585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.761781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.761791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.762115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.762450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.762459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.762810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.763107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.763115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.763415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.763739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.763748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.764144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.764355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.764364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.764680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.764984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.764993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 
00:26:53.348 [2024-05-15 11:12:49.765259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.765529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.765537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.765832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.766138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.766146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.766439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.766763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.766771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.767059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.767255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.767263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.767591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.767830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.767838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.768123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.768401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.768410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.768725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.769062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.769070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 
00:26:53.348 [2024-05-15 11:12:49.769369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.769604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.769612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.769915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.770210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.770218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.770527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.770817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.770825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.771159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.771499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.771507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.771825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.772145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.772154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.772346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.772619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.772626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.772964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.773281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.773289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 
00:26:53.348 [2024-05-15 11:12:49.773623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.773953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.773960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.774268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.774580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.774589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.774999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.775173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.775181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.775496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.775697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.775705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.776033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.776340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.776348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.776689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.776874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.776881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 00:26:53.348 [2024-05-15 11:12:49.777151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.777370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.777377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.348 qpair failed and we were unable to recover it. 
00:26:53.348 [2024-05-15 11:12:49.777560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.777791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.348 [2024-05-15 11:12:49.777800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.778011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.778328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.778336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.778661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.778963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.778970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.779060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.779318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.779325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.779440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.779780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.779789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.780081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.780354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.780362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.780689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.781007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.781014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 
00:26:53.349 [2024-05-15 11:12:49.781217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.781391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.781399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.781601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.781941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.781949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.782250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.782575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.782583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.782898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.783197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.783205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.783401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.783806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.783813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.783892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.784205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.784212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.784439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.784586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.784594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 
00:26:53.349 [2024-05-15 11:12:49.784831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.785036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.785045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.785346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.785683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.785690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.785888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.786118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.786126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.786291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.786585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.786594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.786913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.787217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.787223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.787533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.787895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.787903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.788004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.788302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.788310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 
00:26:53.349 [2024-05-15 11:12:49.788645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.788878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.788885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.789091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.789401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.789408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.789587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.789702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.789708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.349 qpair failed and we were unable to recover it. 00:26:53.349 [2024-05-15 11:12:49.789906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.790243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.349 [2024-05-15 11:12:49.790252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.790423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.790786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.790795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.791138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.791313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.791320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.791576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.791766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.791773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 
00:26:53.350 [2024-05-15 11:12:49.791963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.792255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.792262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.792557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.792767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.792774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.793089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.793362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.793370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.793565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.793843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.793851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.794185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.794339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.794346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.794629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.794891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.794898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.795189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.795375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.795383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 
00:26:53.350 [2024-05-15 11:12:49.795696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.796017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.796023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.796330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.796654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.796660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.796921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.797217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.797223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.797534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.797830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.797836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.798156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.798449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.798455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.798620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.798944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.798951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.799240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.799434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.799440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 
00:26:53.350 [2024-05-15 11:12:49.799772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.799926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.799932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.800116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.800428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.800434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.800761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.801176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.801182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.801512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.801703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.801709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.802002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.802185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.802191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.802364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.802571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.802578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.802643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.802825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.802830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 
00:26:53.350 [2024-05-15 11:12:49.803066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.803339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.803345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.803578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.803775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.803781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.804080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.804383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.804389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.804651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.804962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.350 [2024-05-15 11:12:49.804968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.350 qpair failed and we were unable to recover it. 00:26:53.350 [2024-05-15 11:12:49.805162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.805459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.805465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.805588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.805774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.805781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.806097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.806158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.806165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 
00:26:53.351 [2024-05-15 11:12:49.806472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.806771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.806777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.806962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.807159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.807165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.807368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.807649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.807655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.807851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.808141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.808147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.808435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.808794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.808800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.809096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.809277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.809283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.809466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.809837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.809843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 
00:26:53.351 [2024-05-15 11:12:49.810042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.810348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.810354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.810643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.811007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.811012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.811330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.811478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.811484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.811622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.811979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.811985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.812156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.812493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.812499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.812683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.812983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.812989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.813321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.813621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.813627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 
00:26:53.351 [2024-05-15 11:12:49.813947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.814273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.814279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.814457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.814795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.814801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.815117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.815423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.815429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.815651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.815970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.815976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.816282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.816523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.816529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.816774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.817113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.817118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.817413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.817624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.817631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 
00:26:53.351 [2024-05-15 11:12:49.818031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.818198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.818204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.818413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.818604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.818611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.818882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.819128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.819133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.819442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.819747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.819754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.820056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.820368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.820374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.351 qpair failed and we were unable to recover it. 00:26:53.351 [2024-05-15 11:12:49.820591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.820912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.351 [2024-05-15 11:12:49.820919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.821222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.821550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.821556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 
00:26:53.352 [2024-05-15 11:12:49.821888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.822107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.822113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.822458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.822793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.822799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.823120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.823418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.823424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.823628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.823984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.823989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.824284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.824586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.824592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.824945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.825262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.825268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.825575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.825678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.825684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 
00:26:53.352 [2024-05-15 11:12:49.825876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.826054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.826060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.826370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.826580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.826587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.826916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.827078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.827084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.827388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.827609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.827615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.827963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.828272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.828278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.828675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.828942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.828948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.829248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.829573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.829579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 
00:26:53.352 [2024-05-15 11:12:49.829794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.829989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.829995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.830307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.830497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.830503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.830778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.831140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.831146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.831442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.831674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.831680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.831889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.832202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.832207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.832501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.832605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.832611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.832856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.833156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.833162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 
00:26:53.352 [2024-05-15 11:12:49.833471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.833624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.833630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.833907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.834219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.834225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.834557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.834873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.834879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.835173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.835366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.835372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.835574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.835773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.835781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.835976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.836163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.836168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 00:26:53.352 [2024-05-15 11:12:49.836481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.836847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.836853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.352 qpair failed and we were unable to recover it. 
00:26:53.352 [2024-05-15 11:12:49.837185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.352 [2024-05-15 11:12:49.837499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.837504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.837739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.838043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.838049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.838135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.838363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.838369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.838660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.838992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.838999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.839198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.839369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.839374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.839535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.839883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.839890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.840195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.840469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.840475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 
00:26:53.353 [2024-05-15 11:12:49.840819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.841127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.841133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.841763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.842113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.842120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.842324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.842603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.842609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.842843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.843043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.843050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.843328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.843646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.843654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.843939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.844016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.844021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.844327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.844655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.844662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 
00:26:53.353 [2024-05-15 11:12:49.844864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.845170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.845175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.353 qpair failed and we were unable to recover it. 00:26:53.353 [2024-05-15 11:12:49.845485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.845804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.353 [2024-05-15 11:12:49.845810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.616 [2024-05-15 11:12:50.224346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.224864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.224911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.616 [2024-05-15 11:12:50.225355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.225801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.225851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.616 [2024-05-15 11:12:50.226208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.226591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.226621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.616 [2024-05-15 11:12:50.226987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.227308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.227318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.616 [2024-05-15 11:12:50.227795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.228210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.228224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 
00:26:53.616 [2024-05-15 11:12:50.228556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.228880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.228891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.616 [2024-05-15 11:12:50.229222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.229588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.229618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.616 [2024-05-15 11:12:50.229849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.230322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.230333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.616 [2024-05-15 11:12:50.230694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.230971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.616 [2024-05-15 11:12:50.230982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.616 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.231310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.231643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.231653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.231984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.232328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.232339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.232592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.232937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.232947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 
00:26:53.617 [2024-05-15 11:12:50.233264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.233487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.233497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.233788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.234151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.234161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.234481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.234725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.234735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.235112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.235478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.235489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.235604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.235839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.235850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.236167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.236524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.236535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.236857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.237193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.237202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 
00:26:53.617 [2024-05-15 11:12:50.237513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.237692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.237703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.237941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.238264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.238274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.238592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.238955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.238964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.239288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.239487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.239498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.239805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.240124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.240134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.240348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.240671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.240681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.241018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.241340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.241349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 
00:26:53.617 [2024-05-15 11:12:50.241681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.242030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.242040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.242258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.242455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.242467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.242639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.242994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.243004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.243345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.243731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.243741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.244118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.244345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.244356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.244717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.245130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.245139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.245491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.245660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.245669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 
00:26:53.617 [2024-05-15 11:12:50.245994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.246336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.246347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.246673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.247074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.247084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.247410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.247790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.247800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.248143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.248468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.248478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.248811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.249140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.617 [2024-05-15 11:12:50.249151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.617 qpair failed and we were unable to recover it. 00:26:53.617 [2024-05-15 11:12:50.249464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.249763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.249773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.250103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.250433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.250442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 
00:26:53.618 [2024-05-15 11:12:50.250679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.251033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.251042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.251381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.251673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.251683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.251919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.252227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.252239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.252557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.252873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.252883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.253191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.253397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.253407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.253621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.253946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.253956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.254158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.254473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.254484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 
00:26:53.618 [2024-05-15 11:12:50.254625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.254926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.254937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.255130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.255445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.255455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.255772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.256045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.256055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.256371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.256693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.256702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.256921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.257126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.257137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.257457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.257620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.257630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.257925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.258238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.258247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 
00:26:53.618 [2024-05-15 11:12:50.258573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.258797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.258807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.259120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.259437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.259447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.259685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.259969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.259978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.260301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.260633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.260642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.260960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.261299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.261309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.261700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.261935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.261944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.262135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.262454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.262465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 
00:26:53.618 [2024-05-15 11:12:50.262772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.262965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.262973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.263319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.263697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.263707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.264054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.264386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.264395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.264724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.264986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.264995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.265306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.265645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.265654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.266001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.266329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.266339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.618 qpair failed and we were unable to recover it. 00:26:53.618 [2024-05-15 11:12:50.266641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.618 [2024-05-15 11:12:50.266975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.619 [2024-05-15 11:12:50.266984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.619 qpair failed and we were unable to recover it. 
00:26:53.888 [2024-05-15 11:12:50.267193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.267516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.267526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.267874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.268228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.268238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.268559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.268903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.268912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.269232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.269559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.269569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.269858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.270181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.270191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.270506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.270862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.270871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.271189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.271516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.271526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 
00:26:53.888 [2024-05-15 11:12:50.271855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.272188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.272198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.272395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.272725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.272734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.273045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.273365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.273374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.273577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.273937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.273947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.274288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.274591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.274601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.274950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.275280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.275289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.275455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.275786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.275796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 
00:26:53.888 [2024-05-15 11:12:50.276127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.276483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.276493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.276789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.277111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.277120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.277501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.277776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.277787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.277952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.278258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.278269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.278588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.278936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.278946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.279263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.279560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.279570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.279925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.280114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.280122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 
00:26:53.888 [2024-05-15 11:12:50.280425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.280741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.280750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.281080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.281270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.281279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.281459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.281798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.281807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.282157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.282483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.282493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.282799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.283122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.283131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.283444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.283737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.283746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 00:26:53.888 [2024-05-15 11:12:50.284056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.284363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.888 [2024-05-15 11:12:50.284372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.888 qpair failed and we were unable to recover it. 
00:26:53.889 [2024-05-15 11:12:50.284679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.285015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.285024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.285307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.285480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.285490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.285802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.285988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.285997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.286203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.286535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.286552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.286876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.287215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.287225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.287540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.287845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.287854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.288169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.288518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.288527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 
00:26:53.889 [2024-05-15 11:12:50.288840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.289043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.289054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.289368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.289691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.289700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.290020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.290342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.290353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.290683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.290914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.290923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.291251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.291582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.291592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.291922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.292270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.292279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.292583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.292823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.292832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 
00:26:53.889 [2024-05-15 11:12:50.293040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.293365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.293374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.293703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.294032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.294041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.294357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.294673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.294684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.294972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.295320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.295329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.295649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.296002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.296011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.296331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.296664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.296673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.296981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.297290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.297300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 
00:26:53.889 [2024-05-15 11:12:50.297600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.297929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.297938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.298251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.298583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.298594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.298914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.299216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.299225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.299535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.299914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.299924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.300121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.300325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.300334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.300724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.301050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.301059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.301383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.301665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.301675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 
00:26:53.889 [2024-05-15 11:12:50.301990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.302352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.302362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.889 qpair failed and we were unable to recover it. 00:26:53.889 [2024-05-15 11:12:50.302582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.889 [2024-05-15 11:12:50.302916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.302926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.303243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.303572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.303582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.303909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.304205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.304213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.304494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.304754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.304763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.305066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.305385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.305393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.305685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.305992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.306001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 
00:26:53.890 [2024-05-15 11:12:50.306323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.306638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.306647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.306977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.307293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.307303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.307643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.307982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.307991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.308312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.308707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.308717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.309079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.309401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.309411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.309765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.310091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.310100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.310401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.310565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.310574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 
00:26:53.890 [2024-05-15 11:12:50.310936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.311286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.311295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.311615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.311955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.311964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.312270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.312595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.312604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.312917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.313177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.313188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.313496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.313792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.313801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.314113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.314434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.314444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.314798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.315116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.315125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 
00:26:53.890 [2024-05-15 11:12:50.315429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.315758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.315768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.316050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.316386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.316395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.316718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.317022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.317031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.317220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.317554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.317564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.317745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.318018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.318027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.318312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.318516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.318525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.318885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.319180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.319190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 
00:26:53.890 [2024-05-15 11:12:50.319409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.319763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.319773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.320103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.320464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.320473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.890 [2024-05-15 11:12:50.320934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.321284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.890 [2024-05-15 11:12:50.321293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.890 qpair failed and we were unable to recover it. 00:26:53.891 [2024-05-15 11:12:50.321612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.891 [2024-05-15 11:12:50.321875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.891 [2024-05-15 11:12:50.321883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.891 qpair failed and we were unable to recover it. 00:26:53.891 [2024-05-15 11:12:50.322183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.891 [2024-05-15 11:12:50.322504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.891 [2024-05-15 11:12:50.322513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.891 qpair failed and we were unable to recover it. 00:26:53.891 [2024-05-15 11:12:50.322802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.891 [2024-05-15 11:12:50.323116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.891 [2024-05-15 11:12:50.323126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.891 qpair failed and we were unable to recover it. 00:26:53.891 [2024-05-15 11:12:50.323440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.891 [2024-05-15 11:12:50.323779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.891 [2024-05-15 11:12:50.323788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.891 qpair failed and we were unable to recover it. 
00:26:53.891 [2024-05-15 11:12:50.324109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.891 [2024-05-15 11:12:50.324391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.891 [2024-05-15 11:12:50.324400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:53.891 qpair failed and we were unable to recover it.
[... identical failure cycle condensed: the pair of posix.c:1037:posix_sock_create "connect() failed, errno = 111" messages, the nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420" message, and "qpair failed and we were unable to recover it." repeat back-to-back with no other output, the in-log timestamps advancing from 11:12:50.324 to 11:12:50.417 (elapsed 00:26:53.891 through 00:26:53.896) as the host keeps retrying the connection ...]
00:26:53.896 [2024-05-15 11:12:50.417759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.418077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.418086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 00:26:53.896 [2024-05-15 11:12:50.418393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.418717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.418726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 00:26:53.896 [2024-05-15 11:12:50.419034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.419364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.419373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 00:26:53.896 [2024-05-15 11:12:50.419675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.419979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.419988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 00:26:53.896 [2024-05-15 11:12:50.420296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.420609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.420618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 00:26:53.896 [2024-05-15 11:12:50.420931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.421230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.421239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 00:26:53.896 [2024-05-15 11:12:50.421443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.421706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.421715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 
00:26:53.896 [2024-05-15 11:12:50.421997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.422298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.422307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 00:26:53.896 [2024-05-15 11:12:50.422631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.422882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.422891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.896 qpair failed and we were unable to recover it. 00:26:53.896 [2024-05-15 11:12:50.423189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.423509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.896 [2024-05-15 11:12:50.423517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.423816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.423987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.423996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.424332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.424658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.424667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.424985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.425311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.425322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.425633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.425965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.425974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 
00:26:53.897 [2024-05-15 11:12:50.426287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.426611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.426620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.426912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.427230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.427238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.427531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.427732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.427742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.428045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.428367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.428376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.428678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.428986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.428995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.429292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.429629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.429637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.429976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.430314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.430323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 
00:26:53.897 [2024-05-15 11:12:50.430665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.430974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.430983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.431261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.431583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.431592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.431904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.432197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.432206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.432536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.432827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.432835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.433142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.433314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.433323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.433651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.433968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.433979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.434269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.434560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.434569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 
00:26:53.897 [2024-05-15 11:12:50.434844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.435156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.435164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.435474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.435789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.435798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.436103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.436407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.436415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.436711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.437039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.437047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.437370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.437689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.437698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.438053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.438348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.438356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.438663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.438970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.438978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 
00:26:53.897 [2024-05-15 11:12:50.439288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.439639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.897 [2024-05-15 11:12:50.439648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.897 qpair failed and we were unable to recover it. 00:26:53.897 [2024-05-15 11:12:50.439948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.440263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.440274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.440596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.440916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.440925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.441233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.441556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.441564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.441869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.442187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.442195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.442389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.442658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.442666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.442979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.443301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.443309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 
00:26:53.898 [2024-05-15 11:12:50.443620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.443947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.443954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.444253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.444506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.444514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.444842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.445200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.445207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.445515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.445687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.445696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.446006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.446174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.446185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.446519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.446850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.446859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.447168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.447482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.447490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 
00:26:53.898 [2024-05-15 11:12:50.447800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.448120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.448129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.448430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.448752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.448761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.448979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.449254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.449262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.449625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.449939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.449947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.450253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.450420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.450428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.450704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.451006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.451014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.451359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.451677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.451687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 
00:26:53.898 [2024-05-15 11:12:50.451992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.452298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.452308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.452620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.452790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.452799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.453104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.453408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.453415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.453708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.454022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.454029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.454195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.454478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.454487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.454668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.454964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.454973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.455282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.455610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.455618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 
00:26:53.898 [2024-05-15 11:12:50.455981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.456203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.456211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.456534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.456822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.456830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.898 qpair failed and we were unable to recover it. 00:26:53.898 [2024-05-15 11:12:50.457140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.457456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.898 [2024-05-15 11:12:50.457464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.457814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.458111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.458119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.458445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.458781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.458789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.459081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.459381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.459389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.459703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.460019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.460026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 
00:26:53.899 [2024-05-15 11:12:50.460334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.460667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.460676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.460943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.461258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.461266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.461577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.461896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.461905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.462212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.462528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.462537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.462734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.463060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.463068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.463380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.463698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.463707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.463887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.464131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.464139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 
00:26:53.899 [2024-05-15 11:12:50.464459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.464751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.464759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.465064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.465379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.465387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.465677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.465990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.465998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.466292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.466597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.466606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.466924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.467222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.467230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.467604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.467860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.467868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.468158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.468474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.468482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 
00:26:53.899 [2024-05-15 11:12:50.468791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.468949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.468958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.469135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.469489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.469497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.469809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.470129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.470137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.470466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.470750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.470758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.471064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.471383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.471391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.471711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.471892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.471901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.472207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.472392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.472400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 
00:26:53.899 [2024-05-15 11:12:50.472701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.473041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.473048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.473385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.473701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.473710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.474017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.474221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.474228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.474549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.474809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.474816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.899 qpair failed and we were unable to recover it. 00:26:53.899 [2024-05-15 11:12:50.475142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.899 [2024-05-15 11:12:50.475453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.900 [2024-05-15 11:12:50.475462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.900 qpair failed and we were unable to recover it. 00:26:53.900 [2024-05-15 11:12:50.475772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.900 [2024-05-15 11:12:50.476095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.900 [2024-05-15 11:12:50.476103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.900 qpair failed and we were unable to recover it. 00:26:53.900 [2024-05-15 11:12:50.476407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.900 [2024-05-15 11:12:50.476710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.900 [2024-05-15 11:12:50.476718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:53.900 qpair failed and we were unable to recover it. 
00:26:53.900 [2024-05-15 11:12:50.477021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.900 [2024-05-15 11:12:50.477362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:53.900 qpair failed and we were unable to recover it.
[... the same error sequence -- posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats continuously with only the timestamps advancing, from [2024-05-15 11:12:50.477021] through [2024-05-15 11:12:50.567826] (console timestamps 00:26:53.900 - 00:26:54.178); every connection attempt to the target is refused and no qpair can be recovered ...]
00:26:54.178 [2024-05-15 11:12:50.568168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.568491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.568500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.568725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.568986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.568995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.569381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.569687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.569696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.570013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.570207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.570214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.570514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.570844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.570852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.571158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.571446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.571453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.571777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.572083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.572091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 
00:26:54.178 [2024-05-15 11:12:50.572445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.572707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.572717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.573031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.573328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.573336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.573613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.573939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.573947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.574251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.574566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.574575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.574890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.575100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.575107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.575411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.575663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.575671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.576022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.576332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.576339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 
00:26:54.178 [2024-05-15 11:12:50.576646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.576968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.576975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.577281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.577615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.577623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.577930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.578225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.578233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.578534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.578838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.578846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.579170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.579483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.579491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.579799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.580096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.580104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 00:26:54.178 [2024-05-15 11:12:50.580438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.580712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.580719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.178 qpair failed and we were unable to recover it. 
00:26:54.178 [2024-05-15 11:12:50.581030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.178 [2024-05-15 11:12:50.581351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.581358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.581653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.581929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.581937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.582231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.582525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.582532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.582853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.583186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.583193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.583502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.583782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.583791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.583831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.584139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.584148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.584501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.584823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.584831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-05-15 11:12:50.585121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.585425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.585432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.585709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.586031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.586038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.586344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.586657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.586665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.586986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.587294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.587302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.587480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.587762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.587770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.588054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.588366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.588374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.588689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.589016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.589024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-05-15 11:12:50.589319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.589616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.589624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.589907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.590105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.590112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.590422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.590734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.590742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.591064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.591250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.591256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.591562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.591848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.591857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.592144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.592498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.592507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.592817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.593133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.593141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-05-15 11:12:50.593428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.593727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.593735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.594097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.594411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.594418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.594581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.594880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.594888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.595182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.595521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.595528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.595824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.596148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.596157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.596459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.596785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.596793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.597116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.597403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.597411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 
00:26:54.179 [2024-05-15 11:12:50.597671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.597971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.597980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.179 qpair failed and we were unable to recover it. 00:26:54.179 [2024-05-15 11:12:50.598297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.179 [2024-05-15 11:12:50.598610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.598619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.598908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.599223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.599231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.599539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.599827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.599835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.600146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.600471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.600479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.600782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.601446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.601462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.601624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.601897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.601905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-05-15 11:12:50.602196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.602514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.602522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.602845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.603166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.603174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.603481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.603732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.603740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.603910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.604254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.604261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.604584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.604906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.604914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.605225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.605512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.605521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.605873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.606143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.606152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-05-15 11:12:50.606455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.606830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.606838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.607122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.607434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.607442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.607766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.608082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.608090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.608264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.608442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.608450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.608720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.608935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.608943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.609229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.609549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.609557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.609833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.610147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.610154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 
00:26:54.180 [2024-05-15 11:12:50.610467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.610772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.610780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.611071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.611401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.611409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.611733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.612046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.612053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.612361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.612689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.612697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.180 qpair failed and we were unable to recover it. 00:26:54.180 [2024-05-15 11:12:50.613016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.613335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.180 [2024-05-15 11:12:50.613343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.613648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.613966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.613974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.614326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.614658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.614666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-05-15 11:12:50.615048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.615267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.615274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.615581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.615796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.615803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.616082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.616356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.616363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.616555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.616834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.616842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.617141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.617342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.617349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.617656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.617972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.617979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.618174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.618463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.618471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-05-15 11:12:50.618779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.619098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.619106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.619413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.619709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.619716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.620031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.620319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.620328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.620519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.620810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.620818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.621132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.621460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.621468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.621778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.622062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.622069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.622374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.622652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.622659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-05-15 11:12:50.622830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.623153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.623161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.623448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.623648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.623655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.623965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.624278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.624286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.624610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.624923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.624931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.625243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.625555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.625563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.625831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.626142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.626151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.626312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.626610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.626618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 
00:26:54.181 [2024-05-15 11:12:50.626930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.627252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.627260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.627573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.627891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.627899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.628188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.628501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.628508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.628820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.629141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.629148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.629454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.629654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.629661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.181 qpair failed and we were unable to recover it. 00:26:54.181 [2024-05-15 11:12:50.629844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.181 [2024-05-15 11:12:50.630167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.630175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.630456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.630754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.630761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-05-15 11:12:50.631079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.631307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.631315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.631652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.631981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.631990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.632298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.632551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.632559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.632898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.633184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.633192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.633509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.633693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.633700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.634011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.634340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.634347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.634652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.634941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.634949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-05-15 11:12:50.635265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.635557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.635565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.635866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.636187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.636194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.636499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.636809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.636816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.637121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.637283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.637291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.637590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.637899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.637909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.638212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.638528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.638535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.638743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.639003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.639010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-05-15 11:12:50.639334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.639667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.639675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.639965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.640308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.640315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.640623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.640943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.640951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.641254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.641574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.641582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.641760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.642029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.642036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.642325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.642607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.642615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.642933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.643240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.643249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.182 [2024-05-15 11:12:50.643516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.643823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.643831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.644143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.644462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.644470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.644823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.645140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.645148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.645464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.645742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.645749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.646073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.646386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.646393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.646697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.647026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.647033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 00:26:54.182 [2024-05-15 11:12:50.647357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.647651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.182 [2024-05-15 11:12:50.647660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.182 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-05-15 11:12:50.647986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.648295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.648303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.648614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.648808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.648817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.649065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.649313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.649320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.649643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.649934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.649942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.650253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.650565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.650573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.650876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.651194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.651201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.651486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.651747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.651755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-05-15 11:12:50.652063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.652312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.652319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.652534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.652820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.652828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.653101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.653434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.653442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.653766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.654079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.654087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.654365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.654635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.654642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.654948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.655293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.655300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.655598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.655933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.655940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-05-15 11:12:50.656238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.656561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.656570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.656873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.657150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.657158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.657470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.657794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.657802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.658112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.658398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.658405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.658713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.659018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.659026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.659353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.659661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.659669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.659938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.660259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.660266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 
00:26:54.183 [2024-05-15 11:12:50.660569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.660877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.660884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.661188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.661501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.661509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.661865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.662193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.662202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.662508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.662802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.662809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.663113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.663316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.663324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.663608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.664011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.664018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.183 qpair failed and we were unable to recover it. 00:26:54.183 [2024-05-15 11:12:50.664340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.664646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.183 [2024-05-15 11:12:50.664653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 
00:26:54.184 [2024-05-15 11:12:50.664956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.665267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.665275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.665580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.665888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.665896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.666179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.666476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.666483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.666654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.666987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.666995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.667301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.667649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.667657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.667922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.668210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.668217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.668531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.668782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.668789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 
00:26:54.184 [2024-05-15 11:12:50.669078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.669399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.669407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.669712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.670016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.670023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.670320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.670609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.670616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.670930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.671240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.671248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.671561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.671877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.671885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.672193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.672397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.672404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.672709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.672884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.672892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 
00:26:54.184 [2024-05-15 11:12:50.673232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.673565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.673573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.673836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.674147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.674153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.674456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.674774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.674781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.675090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.675371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.675379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.675678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.675949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.675956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.676258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.676568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.676575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.676879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.677080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.677087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 
00:26:54.184 [2024-05-15 11:12:50.677391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.677681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.677689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.677902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.678173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.678180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.678511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.678681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.678688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.678999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.679292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.679299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.679614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.679934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.679942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.680250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.680549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.680558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.680842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.681152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.681160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 
00:26:54.184 [2024-05-15 11:12:50.681469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.681743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.681751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.184 qpair failed and we were unable to recover it. 00:26:54.184 [2024-05-15 11:12:50.682049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.184 [2024-05-15 11:12:50.682361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.682368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.682681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.682996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.683004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.683327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.683656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.683664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.683965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.684275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.684282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.684593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.684865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.684872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.685038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.685367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.685375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 
00:26:54.185 [2024-05-15 11:12:50.685705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.686025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.686033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.686333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.686525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.686532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.686916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.687218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.687226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.687434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.687726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.687734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.687923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.688192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.688199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.688507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.688824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.688832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.689120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.689396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.689403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 
00:26:54.185 [2024-05-15 11:12:50.689709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.690033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.690041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.690325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.690605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.690613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.690899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.691202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.691211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.691515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.691833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.691840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.692141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.692454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.692462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.692768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.693098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.693106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.693445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.693733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.693741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 
00:26:54.185 [2024-05-15 11:12:50.694058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.694297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.694304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.694601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.694868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.694876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.695165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.695474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.695481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.185 qpair failed and we were unable to recover it. 00:26:54.185 [2024-05-15 11:12:50.695791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.185 [2024-05-15 11:12:50.696120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.696127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.696422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.696732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.696740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.696994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.697129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.697136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.697412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.697712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.697720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 
00:26:54.186 [2024-05-15 11:12:50.698047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.698379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.698387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.698693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.699012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.699020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.699324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.699635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.699642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.699938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.700265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.700273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.700450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.700741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.700749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.701062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.701363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.701370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 00:26:54.186 [2024-05-15 11:12:50.701682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.702000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.186 [2024-05-15 11:12:50.702007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.186 qpair failed and we were unable to recover it. 
00:26:54.186 [2024-05-15 11:12:50.702328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.186 [2024-05-15 11:12:50.702505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.186 qpair failed and we were unable to recover it.
[... the same sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 11:12:50.702 through 11:12:50.747 ...]
00:26:54.189 [2024-05-15 11:12:50.747651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 508874 Killed "${NVMF_APP[@]}" "$@"
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating against 10.0.0.2:4420 while the test relaunches the target; only the script-trace lines that were interleaved with it are kept below ...]
00:26:54.189 11:12:50 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:26:54.189 11:12:50 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:54.189 11:12:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:54.189 11:12:50 -- common/autotest_common.sh@720 -- # xtrace_disable
00:26:54.189 11:12:50 -- common/autotest_common.sh@10 -- # set +x
00:26:54.189 11:12:50 -- nvmf/common.sh@470 -- # nvmfpid=509904
00:26:54.189 11:12:50 -- nvmf/common.sh@471 -- # waitforlisten 509904
00:26:54.189 11:12:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:54.189 11:12:50 -- common/autotest_common.sh@827 -- # '[' -z 509904 ']'
00:26:54.189 11:12:50 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:54.189 11:12:50 -- common/autotest_common.sh@832 -- # local max_retries=100
00:26:54.189 11:12:50 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:54.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:54.190 11:12:50 -- common/autotest_common.sh@836 -- # xtrace_disable
00:26:54.190 11:12:50 -- common/autotest_common.sh@10 -- # set +x
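Editorial note, not part of the console output: errno 111 on Linux is ECONNREFUSED, which is what posix_sock_create keeps reporting while the killed target process is no longer accepting connections on 10.0.0.2:4420. A minimal sketch, assuming a Linux host with bash's /dev/tcp support and nothing listening on that address, that reproduces the same failure mode outside of SPDK:

#!/usr/bin/env bash
# Sketch only -- not part of target_disconnect.sh. bash's /dev/tcp pseudo-device
# performs an ordinary TCP connect(), so pointing it at an address/port with no
# listener fails with ECONNREFUSED (errno 111), the error seen throughout the log.
addr=10.0.0.2   # target address taken from the log above
port=4420       # NVMe/TCP listener port taken from the log above
if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
    echo "connected to ${addr}:${port}"
else
    echo "connect() to ${addr}:${port} failed (connection refused or timed out)"
fi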
00:26:54.190 [2024-05-15 11:12:50.761439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.190 [2024-05-15 11:12:50.761752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.190 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for every further reconnect attempt through 11:12:50.788 ...]
00:26:54.191 [2024-05-15 11:12:50.788476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-05-15 11:12:50.788752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-05-15 11:12:50.788760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.191 qpair failed and we were unable to recover it. 00:26:54.191 [2024-05-15 11:12:50.789110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.191 [2024-05-15 11:12:50.789323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.789331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.789628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.789947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.789954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.790265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.790460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.790467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.790774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.791071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.791079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.791378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.791674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.791682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.791847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.792061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.792068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-05-15 11:12:50.792254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.792596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.792604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.792796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.793090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.793097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.793421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.793776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.793783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.793957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.794156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.794163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.794397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.794664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.794671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.794856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.795185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.795193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.795482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.795646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.795653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-05-15 11:12:50.795880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.796154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.796161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.796470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.796754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.796762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.797072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.797246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.797253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.797587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.797967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.797974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.798136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.798402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.798409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.798717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.799013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.799020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.799311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.799617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.799625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-05-15 11:12:50.799938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.800263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.800271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.800447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.800810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.800817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.800990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.801301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.801308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.801625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.801816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.801824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.802135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.802453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.802460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.802782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.802991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.802998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.803308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.803604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.803611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 
00:26:54.192 [2024-05-15 11:12:50.803896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.804199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.804207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.804540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.804840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.804848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.805173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.805511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.192 [2024-05-15 11:12:50.805519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.192 qpair failed and we were unable to recover it. 00:26:54.192 [2024-05-15 11:12:50.805789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.806119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.806126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.806306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.806570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.806579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.806762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.806916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.806923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.807195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.807412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.807418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.193 [2024-05-15 11:12:50.807632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.807907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.807914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.193 qpair failed and we were unable to recover it.
00:26:54.193 [2024-05-15 11:12:50.808224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.808238] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization...
00:26:54.193 [2024-05-15 11:12:50.808282] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:54.193 [2024-05-15 11:12:50.808397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.808405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.193 qpair failed and we were unable to recover it.
00:26:54.193 [2024-05-15 11:12:50.808623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.808895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.808902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.193 qpair failed and we were unable to recover it.
00:26:54.193 [2024-05-15 11:12:50.809271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.809597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.809605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.193 qpair failed and we were unable to recover it.
00:26:54.193 [2024-05-15 11:12:50.809898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.810235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.810245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.193 qpair failed and we were unable to recover it.
00:26:54.193 [2024-05-15 11:12:50.810296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.810470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.193 [2024-05-15 11:12:50.810478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.193 qpair failed and we were unable to recover it.
00:26:54.193 [2024-05-15 11:12:50.810839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.811211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.811219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.811503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.811771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.811780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.812053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.812345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.812353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.812641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.812932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.812941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.813254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.813416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.813424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.813708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.814017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.814026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.814360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.814670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.814679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 
00:26:54.193 [2024-05-15 11:12:50.815023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.815369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.815378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.815664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.815988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.193 [2024-05-15 11:12:50.815998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.193 qpair failed and we were unable to recover it. 00:26:54.193 [2024-05-15 11:12:50.816166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.816450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.816458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-05-15 11:12:50.816766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.817099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.817107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-05-15 11:12:50.817447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.817641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.817649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-05-15 11:12:50.817967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.818159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.818167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-05-15 11:12:50.818240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.818659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.818667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 
00:26:54.464 [2024-05-15 11:12:50.819010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.819307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.819315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-05-15 11:12:50.819659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.819842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.819850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-05-15 11:12:50.820149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.820331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.820339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-05-15 11:12:50.820672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.820828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.464 [2024-05-15 11:12:50.820836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.464 qpair failed and we were unable to recover it. 00:26:54.464 [2024-05-15 11:12:50.821157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.821493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.821503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.821657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.821923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.821931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.822140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.822429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.822436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 
00:26:54.465 [2024-05-15 11:12:50.822634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.822916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.822924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.823124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.823461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.823470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.823767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.824094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.824102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.824404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.824714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.824722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.825079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.825408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.825416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.825580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.825879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.825887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.826201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.826486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.826494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 
00:26:54.465 [2024-05-15 11:12:50.826811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.827139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.827147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.827466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.827761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.827769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.828088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.828249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.828256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.828439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.828711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.828718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.829045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.829378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.829385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.829708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.830054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.830062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.830247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.830425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.830432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 
00:26:54.465 [2024-05-15 11:12:50.830622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.830926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.830934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.831130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.831408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.831416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.831713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.832009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.832016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.832329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.832533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.832540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.832901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.833153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.833161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.833504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.833824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.833831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.834138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.834487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.834494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 
00:26:54.465 [2024-05-15 11:12:50.834674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.834972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.834980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.835157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.835440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.835448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.835807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.836137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.836146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.836433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.836750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.836758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.837108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.837444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.837451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.465 qpair failed and we were unable to recover it. 00:26:54.465 [2024-05-15 11:12:50.837695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.465 [2024-05-15 11:12:50.838009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.838016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.838334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.838657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.838665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 
00:26:54.466 [2024-05-15 11:12:50.838972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.839158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.839165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.466 qpair failed and we were unable to recover it.
00:26:54.466 EAL: No free 2048 kB hugepages reported on node 1
00:26:54.466 [2024-05-15 11:12:50.839475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.839706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.839715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.466 qpair failed and we were unable to recover it.
00:26:54.466 [2024-05-15 11:12:50.840057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.840259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.840266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.466 qpair failed and we were unable to recover it.
00:26:54.466 [2024-05-15 11:12:50.840620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.840948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.840956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.466 qpair failed and we were unable to recover it.
00:26:54.466 [2024-05-15 11:12:50.841134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.841459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.841467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.466 qpair failed and we were unable to recover it.
00:26:54.466 [2024-05-15 11:12:50.841809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.842145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.842152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.466 qpair failed and we were unable to recover it.
00:26:54.466 [2024-05-15 11:12:50.842446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.842813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.466 [2024-05-15 11:12:50.842820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.466 qpair failed and we were unable to recover it.
00:26:54.466 [2024-05-15 11:12:50.843179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.843480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.843488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.843570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.843872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.843880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.844207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.844544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.844554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.844875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.845198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.845206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.845530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.845845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.845853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.846068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.846331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.846338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.846534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.846830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.846838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 
00:26:54.466 [2024-05-15 11:12:50.847157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.847336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.847343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.847645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.847948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.847956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.848245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.848563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.848571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.848888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.849155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.849162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.849483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.849777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.849785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.850085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.850282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.850289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.850564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.850852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.850859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 
00:26:54.466 [2024-05-15 11:12:50.851149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.851440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.851448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.851757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.852058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.852066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.852387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.852557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.852566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.852863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.853081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.853090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.853428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.853721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.853728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.854054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.854393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.854401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 00:26:54.466 [2024-05-15 11:12:50.854715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.855039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.466 [2024-05-15 11:12:50.855047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.466 qpair failed and we were unable to recover it. 
00:26:54.467 [2024-05-15 11:12:50.855351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.855677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.855685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.855871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.856218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.856225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.856537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.856849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.856856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.857171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.857498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.857506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.857685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.857761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.857768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.858071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.858410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.858417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.858715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.859104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.859111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 
00:26:54.467 [2024-05-15 11:12:50.859406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.859720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.859728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.859932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.860213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.860221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.860533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.860882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.860889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.861080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.861296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.861304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.861624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.861946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.861954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.862271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.862560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.862568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.862821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.862980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.862987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 
00:26:54.467 [2024-05-15 11:12:50.863299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.863642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.863650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.863978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.864158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.864165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.864362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.864638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.864645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.864818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.865141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.865148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.865460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.865659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.865667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.866008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.866199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.866207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.866539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.866840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.866848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 
00:26:54.467 [2024-05-15 11:12:50.867169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.867467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.867475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.867807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.868145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.868153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.868442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.868747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.868754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.869047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.869363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.869370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.869697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.870044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.870051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.870343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.870664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.870671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.870986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.871347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.871354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 
00:26:54.467 [2024-05-15 11:12:50.871516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.871801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.871808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.467 qpair failed and we were unable to recover it. 00:26:54.467 [2024-05-15 11:12:50.872134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.467 [2024-05-15 11:12:50.872461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.872468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.872785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.872988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.872995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.873340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.873531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.873538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.873859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.874180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.874189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.874500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.874812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.874819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.875132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.875465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.875472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 
00:26:54.468 [2024-05-15 11:12:50.875646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.875917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.875925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.876252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.876577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.876585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.876899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.877227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.877235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.877537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.877824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.877832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.878163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.878331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.878338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.878525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.878883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.878891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.879074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.879322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.879329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 
00:26:54.468 [2024-05-15 11:12:50.879676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.879959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.879967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.880262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.880585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.880593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.880911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.881158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.881166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.881345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.881510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.881518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.881691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.882011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.882018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.882337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.882561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.882569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.882789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.883061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.883069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 
00:26:54.468 [2024-05-15 11:12:50.883352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.883697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.883705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.883894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.884070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.884077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.884261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.884468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.884475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.884661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.884957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.884964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.885309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.885625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.885632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.885959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.886298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.886305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.886615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.886902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.886909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 
00:26:54.468 [2024-05-15 11:12:50.887093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.887272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.887280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.887463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.887608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.887616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.887914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.888205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.468 [2024-05-15 11:12:50.888213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.468 qpair failed and we were unable to recover it. 00:26:54.468 [2024-05-15 11:12:50.888393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.888663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.888671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.888856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.889132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.889139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.889470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.889758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.889767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.889987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.890306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.890314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 
00:26:54.469 [2024-05-15 11:12:50.890371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.469 [2024-05-15 11:12:50.890502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.890859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.890867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.891216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.891392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.891400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.891683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.891998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.892006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.892364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.892558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.892567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.892905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.893230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.893237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.893551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.893897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.893904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.894226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.894598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.894606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 
00:26:54.469 [2024-05-15 11:12:50.895010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.895345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.895353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.895533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.895788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.895796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.896135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.896207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.896215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.896585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.896940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.896948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.897255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.897431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.897438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.897823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.898155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.898163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.898491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.898608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.898614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 
00:26:54.469 [2024-05-15 11:12:50.898807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.899110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.899117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.899296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.899626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.899634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.899939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.900257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.900264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.900557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.900878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.900886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.901197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.901531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.901539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.469 [2024-05-15 11:12:50.901855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.902163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.469 [2024-05-15 11:12:50.902170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.469 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.902329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.902480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.902487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-05-15 11:12:50.902831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.903176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.903184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.903502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.903674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.903682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.904017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.904205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.904213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.904518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.904814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.904822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.905141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.905435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.905442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.905575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.905872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.905880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.906202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.906427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.906435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-05-15 11:12:50.906637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.906936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.906944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.907251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.907552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.907560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.907853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.908059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.908067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.908296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.908628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.908636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.908938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.909259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.909267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.909574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.909925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.909933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.910239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.910432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.910440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-05-15 11:12:50.910783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.911093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.911101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.911391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.911570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.911578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.911899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.912210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.912217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.912482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.912796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.912804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.912987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.913272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.913279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.913619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.913887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.913895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.914215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.914410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.914417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-05-15 11:12:50.914746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.915050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.915057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.915361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.915696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.915703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.915958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.916281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.916289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.916494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.916795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.916803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.917114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.917448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.917455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.917771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.918081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.918088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 00:26:54.470 [2024-05-15 11:12:50.918418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.918600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.918607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.470 qpair failed and we were unable to recover it. 
00:26:54.470 [2024-05-15 11:12:50.918750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.470 [2024-05-15 11:12:50.919015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.919024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.919333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.919654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.919662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.919852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.920140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.920147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.920484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.920797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.920804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.921114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.921445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.921454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.921670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.921858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.921866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.922168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.922503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.922511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 
00:26:54.471 [2024-05-15 11:12:50.922821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.923009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.923017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.923339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.923613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.923622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.923902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.924214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.924223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.924535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.924847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.924858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.925135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.925446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.925454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.925768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.926009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.926017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.926320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.926653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.926661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 
00:26:54.471 [2024-05-15 11:12:50.926994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.927327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.927335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.927620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.927826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.927834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.928161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.928474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.928483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.928819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.929153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.929162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.929467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.929731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.929739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.930055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.930390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.930397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.930688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.931029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.931038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 
00:26:54.471 [2024-05-15 11:12:50.931342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.931505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.931513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.931827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.932152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.932160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.932483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.932863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.932871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.933161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.933474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.933481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.933769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.934097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.934105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.934406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.934698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.934705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.934952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.935109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.935116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 
00:26:54.471 [2024-05-15 11:12:50.935430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.935711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.935718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.936031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.936355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.471 [2024-05-15 11:12:50.936364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.471 qpair failed and we were unable to recover it. 00:26:54.471 [2024-05-15 11:12:50.936685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.936736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.936744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.937039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.937328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.937336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.937610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.937938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.937946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.938075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.938354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.938362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.938660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.938959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.938966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 
00:26:54.472 [2024-05-15 11:12:50.939292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.939469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.939476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.939781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.940120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.940127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.940443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.940829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.940837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.941140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.941428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.941435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.941755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.941900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.941908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.942277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.942566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.942576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.942872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.943200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.943207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 
00:26:54.472 [2024-05-15 11:12:50.943508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.943695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.943703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.944014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.944336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.944343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.944659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.944982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.944990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.945313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.945482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.945489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.945801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.946117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.946125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.946488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.946780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.946788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.947092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.947416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.947424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 
00:26:54.472 [2024-05-15 11:12:50.947760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.947911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.947918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.948208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.948533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.948540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.948829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.949123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.949131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.949319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.949617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.949625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.949943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.950261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.950269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.950562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.950872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.950880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.951208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.951507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.951515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 
00:26:54.472 [2024-05-15 11:12:50.951827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.952118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.952126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.952392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.952710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.952718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.953020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.953310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.953318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.472 qpair failed and we were unable to recover it. 00:26:54.472 [2024-05-15 11:12:50.953610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.953785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.472 [2024-05-15 11:12:50.953793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.954062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.954175] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.473 [2024-05-15 11:12:50.954203] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.473 [2024-05-15 11:12:50.954205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.954215] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.473 [2024-05-15 11:12:50.954215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.954224] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.473 [2024-05-15 11:12:50.954230] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
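The app_setup_trace notices above describe how the failing connects could be traced further. A minimal sketch of what that would look like on the test node, assuming the spdk_trace tool from the built SPDK tree is on PATH and that instance 0 is the only nvmf target running (both the '-s nvmf -i 0' arguments and the /dev/shm/nvmf_trace.0 path come from the notice text, not from the job itself):

  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt   # capture a snapshot of events while the target is running
  cp /dev/shm/nvmf_trace.0 /tmp/                           # or keep the raw shm trace file for offline analysis/debug

The /tmp destination paths are placeholders for illustration only.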
00:26:54.473 [2024-05-15 11:12:50.954417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:54.473 [2024-05-15 11:12:50.954540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.954559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:54.473 [2024-05-15 11:12:50.954710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:54.473 [2024-05-15 11:12:50.954818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:54.473 [2024-05-15 11:12:50.954840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.954847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.954926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.955238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.955245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.955548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.955755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.955763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.956044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.956292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.956300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.956626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.956806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.956813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.957102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.957326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.957335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 
00:26:54.473 [2024-05-15 11:12:50.957505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.957797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.957806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.958109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.958365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.958373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.958461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.958755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.958763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.959064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.959382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.959389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.959576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.959763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.959772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.960060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.960379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.960387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.960687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.960973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.960980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 
00:26:54.473 [2024-05-15 11:12:50.961338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.961552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.961560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.961843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.962171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.962179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.962481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.962780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.962788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.963024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.963361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.963369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.963682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.963888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.963897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.964193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.964479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.964487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.964799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.965136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.965144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 
00:26:54.473 [2024-05-15 11:12:50.965454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.965658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.965666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.473 [2024-05-15 11:12:50.965976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.966287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.473 [2024-05-15 11:12:50.966294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.473 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.966612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.966952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.966959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.967126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.967385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.967392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.967708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.968021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.968029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.968341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.968658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.968666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.968962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.969277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.969286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 
00:26:54.474 [2024-05-15 11:12:50.969476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.969801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.969812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.969998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.970312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.970320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.970618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.970798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.970806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.971121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.971458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.971465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.971514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.971692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.971699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.971988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.972160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.972168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.972490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.972670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.972677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 
00:26:54.474 [2024-05-15 11:12:50.973037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.973372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.973380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.973685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.974002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.974011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.974324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.974631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.974639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.974941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.975269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.975279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.975556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.975890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.975898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.976204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.976496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.976504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.976779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.977095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.977102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 
00:26:54.474 [2024-05-15 11:12:50.977406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.977710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.977719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.978012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.978298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.978306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.978684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.978946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.978954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.979254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.979530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.979538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.979850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.980181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.980189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.980512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.980653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.980661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.981006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.981324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.981334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 
00:26:54.474 [2024-05-15 11:12:50.981647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.981939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.981946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.981998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.982283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.982291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.982451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.982745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.982753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.474 qpair failed and we were unable to recover it. 00:26:54.474 [2024-05-15 11:12:50.982939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.474 [2024-05-15 11:12:50.983231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.983239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.983412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.983757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.983765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.984053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.984391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.984398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.984575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.984841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.984848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 
00:26:54.475 [2024-05-15 11:12:50.985195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.985486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.985493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.985799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.985940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.985948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.986251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.986588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.986596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.986900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.987056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.987063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.987386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.987570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.987578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.987792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.987949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.987957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.988287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.988447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.988454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 
00:26:54.475 [2024-05-15 11:12:50.988654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.988902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.988909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.989232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.989561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.989570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.989651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.989927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.989934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.990231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.990420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.990427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.990583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.990932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.990940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.991242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.991538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.991549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.991860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.992158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.992166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 
00:26:54.475 [2024-05-15 11:12:50.992489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.992686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.992693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.992998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.993314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.993322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.993500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.993668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.993675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.993999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.994339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.994347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.994519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.994718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.994724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.995043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.995225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.995233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.995427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.995749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.995757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 
00:26:54.475 [2024-05-15 11:12:50.996089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.996437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.996444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.996624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.996779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.996786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.997066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.997241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.997249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.997566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.997846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.997854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.998209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.998521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.998529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.475 qpair failed and we were unable to recover it. 00:26:54.475 [2024-05-15 11:12:50.998829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.475 [2024-05-15 11:12:50.999206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:50.999213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:50.999507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:50.999763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:50.999771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 
00:26:54.476 [2024-05-15 11:12:51.000085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.000425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.000433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.000759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.001090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.001098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.001262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.001573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.001580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.001796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.002079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.002086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.002396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.002686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.002694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.002922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.003234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.003241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.003558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.003836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.003843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 
00:26:54.476 [2024-05-15 11:12:51.004026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.004343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.004351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.004566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.004743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.004751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.005104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.005277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.005285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.005393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.005693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.005702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.006003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.006261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.006268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.006428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.006628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.006635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.006961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.007013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.007021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 
00:26:54.476 [2024-05-15 11:12:51.007286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.007592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.007600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.007786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.008060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.008068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.008379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.008688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.008696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.008997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.009289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.009297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.009466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.009783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.009791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.010144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.010458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.010466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.010746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.011043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.011051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 
00:26:54.476 [2024-05-15 11:12:51.011087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.011391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.011400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.011714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.011902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.011909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.012208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.012528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.012535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.012717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.013007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.013014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.013383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.013561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.013569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.013728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.013916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.013923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.476 qpair failed and we were unable to recover it. 00:26:54.476 [2024-05-15 11:12:51.014138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.476 [2024-05-15 11:12:51.014323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.014330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 
00:26:54.477 [2024-05-15 11:12:51.014499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.014870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.014878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.015185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.015510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.015517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.015811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.015976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.015983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.016264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.016432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.016439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.016719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.017098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.017105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.017413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.017622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.017630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.017832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.018108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.018116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 
00:26:54.477 [2024-05-15 11:12:51.018305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.018603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.018611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.018928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.019099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.019106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.019370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.019687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.019694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.020015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.020174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.020182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.020492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.020773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.020781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.021064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.021400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.021408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.021722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.022051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.022058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 
00:26:54.477 [2024-05-15 11:12:51.022385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.022707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.022714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.022884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.023044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.023052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.023227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.023557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.023564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.023865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.024185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.024193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.024513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.024846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.024854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.025204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.025372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.025379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.025683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.025956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.025963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 
00:26:54.477 [2024-05-15 11:12:51.026118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.026432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.026441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.026767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.027140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.027148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.027470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.027647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.027655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.027819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.028130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.028136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.028433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.028713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.028720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.029011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.029330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.477 [2024-05-15 11:12:51.029338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.477 qpair failed and we were unable to recover it. 00:26:54.477 [2024-05-15 11:12:51.029665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.029856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.029863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 
00:26:54.478 [2024-05-15 11:12:51.030016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.030289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.030296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.030462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.030782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.030790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.031101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.031430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.031438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.031616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.031895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.031902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.032204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.032392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.032399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.032705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.033037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.033044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.033348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.033669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.033677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 
00:26:54.478 [2024-05-15 11:12:51.034000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.034288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.034296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.034490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.034812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.034820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.035116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.035322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.035330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.035624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.035794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.035801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.036096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.036386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.036392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.036713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.037041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.037048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.037214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.037512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.037521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 
00:26:54.478 [2024-05-15 11:12:51.037894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.038138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.038145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.038313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.038612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.038620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.038941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.039116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.039123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.039286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.039593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.039600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.039923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.039964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.039970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.040156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.040426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.040434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.040756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.041080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.041088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 
00:26:54.478 [2024-05-15 11:12:51.041399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.041708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.041716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.041869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.042192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.042199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.042524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.042697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.042704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.043004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.043297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.043304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.043615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.043952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.043959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.478 qpair failed and we were unable to recover it. 00:26:54.478 [2024-05-15 11:12:51.044266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.478 [2024-05-15 11:12:51.044594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.044602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.044765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.045047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.045055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 
00:26:54.479 [2024-05-15 11:12:51.045241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.045556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.045563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.045897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.046235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.046244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.046542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.046853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.046861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.047030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.047348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.047355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.047561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.047924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.047931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.048190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.048479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.048486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.048636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.048926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.048933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 
00:26:54.479 [2024-05-15 11:12:51.049241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.049570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.049578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.049889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.050049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.050055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.050320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.050484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.050492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.050739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.051029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.051037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.051373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.051685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.051694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.052018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.052334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.052342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.052647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.052967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.052975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 
00:26:54.479 [2024-05-15 11:12:51.053294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.053621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.053629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.053786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.054106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.054114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.054418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.054709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.054716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.055041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.055360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.055367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.055676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.056009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.056016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.056312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.056460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.056468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.056776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.057116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.057124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 
00:26:54.479 [2024-05-15 11:12:51.057299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.057506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.057516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.057685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.057879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.057887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.058039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.058318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.058326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.058637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.058848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.058855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.059153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.059471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.059479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.059790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.060129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.060136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 00:26:54.479 [2024-05-15 11:12:51.060417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.060709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.060717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.479 qpair failed and we were unable to recover it. 
00:26:54.479 [2024-05-15 11:12:51.061044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.479 [2024-05-15 11:12:51.061380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.061387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.061683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.061859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.061867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.062182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.062372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.062380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.062677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.063000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.063009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.063314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.063485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.063492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.063688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.063850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.063857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.064145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.064358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.064365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 
00:26:54.480 [2024-05-15 11:12:51.064693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.065029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.065037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.065342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.065658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.065666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.065972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.066295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.066303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.066472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.066771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.066778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.067085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.067412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.067419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.067584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.067854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.067862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.068043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.068331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.068338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 
00:26:54.480 [2024-05-15 11:12:51.068648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.068966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.068973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.069262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.069593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.069600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.069903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.070192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.070199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.070360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.070642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.070649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.070966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.071134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.071141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.071425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.071709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.071717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.071891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.072126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.072133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 
00:26:54.480 [2024-05-15 11:12:51.072432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.072754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.072762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.072938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.073123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.073130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.073281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.073586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.073594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.073883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.074138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.074145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.074448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.074749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.074756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.075066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.075367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.075375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.075703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.075880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.075887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 
00:26:54.480 [2024-05-15 11:12:51.076071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.076290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.076297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.076570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.076846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.480 [2024-05-15 11:12:51.076853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.480 qpair failed and we were unable to recover it. 00:26:54.480 [2024-05-15 11:12:51.077029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.077325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.077332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.077627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.077966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.077974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.078284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.078453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.078460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.078607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.078822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.078830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.079010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.079118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.079125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 
00:26:54.481 [2024-05-15 11:12:51.079318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.079600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.079619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.079904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.080219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.080226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.080540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.080842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.080850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.081064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.081401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.081409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.081585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.081877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.081885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.082192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.082509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.082516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.082799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.083052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.083059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 
00:26:54.481 [2024-05-15 11:12:51.083374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.083693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.083701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.083970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.084234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.084241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.084543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.084837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.084845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.085225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.085512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.085520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.085704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.085889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.085897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.086227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.086420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.086427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.086716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.087022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.087029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 
00:26:54.481 [2024-05-15 11:12:51.087388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.087584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.087591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.087975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.088240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.088247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.088458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.088798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.088805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.089117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.089447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.089454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.089784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.089962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.089969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.090304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.090631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.090639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.090966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.091249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.091257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 
00:26:54.481 [2024-05-15 11:12:51.091298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.091509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.091517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.091691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.091912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.091920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.092218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.092506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.092514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.481 [2024-05-15 11:12:51.092685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.092870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.481 [2024-05-15 11:12:51.092878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.481 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.093184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.093324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.093332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.093612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.093825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.093833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.094138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.094201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.094210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 
00:26:54.482 [2024-05-15 11:12:51.094386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.094690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.094698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.094878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.095213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.095221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.095407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.095552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.095563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.095662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.095854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.095860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.096152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.096311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.096318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.096503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.096823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.096831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.097133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.097471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.097478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 
00:26:54.482 [2024-05-15 11:12:51.097842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.098182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.098190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.098480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.098799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.098806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.099111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.099430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.099436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.099758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.100090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.100097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.100395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.100711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.100719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.101087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.101372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.101379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.101700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.101934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.101941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 
00:26:54.482 [2024-05-15 11:12:51.102265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.102591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.102599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.102910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.103226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.103233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.103477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.103729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.103737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.104043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.104271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.104278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.104570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.104839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.104846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.105161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.105500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.105507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 00:26:54.482 [2024-05-15 11:12:51.105678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.105854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.482 [2024-05-15 11:12:51.105862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.482 qpair failed and we were unable to recover it. 
00:26:54.482 [2024-05-15 11:12:51.106168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.106429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.106438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.106767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.107083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.107090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.107346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.107566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.107573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.107845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.108132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.108138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.108419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.108622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.108629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.108852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.109113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.109120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.109435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.109762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.109770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-05-15 11:12:51.110062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.110379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.110387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.110691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.111019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.111026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.111195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.111488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.111496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.111845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.112171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.112179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.112296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.112618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.112625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.112930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.113102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.113110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.113377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.113694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.113701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-05-15 11:12:51.114006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.114326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.114334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.114641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.114821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.114828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.115109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.115433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.115440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.115798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.116085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.116092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.116268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.116593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.116601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.116903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.117219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.117226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.117421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.117646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.117655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-05-15 11:12:51.117982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.118288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.118295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.118600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.118769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.118776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.119083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.119397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.119404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.119710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.120043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.120050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.120363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.120680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.120687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.121006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.121326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.121333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.121627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.121774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.121781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-05-15 11:12:51.122083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.122261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.122268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.122561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.122748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.122755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.123035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.123236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.123243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.123413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.123684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.123691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.124020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.124337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.124344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.124667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.124832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.124839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.125117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.125442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.125449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-05-15 11:12:51.125747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.126061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.126068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.126365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.126685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.126692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.126867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.127018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.127025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.127185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.127480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.127488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.127713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.128039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.128046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.128323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.128651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.128659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.128829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.129179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.129186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-05-15 11:12:51.129490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.129661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.129669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.129936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.130266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.130273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.130569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.130853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.130861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.131159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.131304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.131311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.131579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.131854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.131861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.132159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.132328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.132335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.132649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.132898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.132906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 
00:26:54.755 [2024-05-15 11:12:51.133230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.133555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.755 [2024-05-15 11:12:51.133563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.755 qpair failed and we were unable to recover it. 00:26:54.755 [2024-05-15 11:12:51.133733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.133924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.133933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.134254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.134577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.134584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.134949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.135032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.135038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.135313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.135477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.135485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.135707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.135872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.135879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.136237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.136460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.136468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-05-15 11:12:51.136652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.136950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.136958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.137270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.137569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.137576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.137894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.137990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.137997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.138331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.138593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.138600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.138781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.139075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.139084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.139383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.139638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.139645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.140035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.140178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.140185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-05-15 11:12:51.140515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.140847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.140855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.141006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.141300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.141308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.141653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.141980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.141987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.142174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.142359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.142367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.142561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.142879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.142886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.143193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.143530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.143538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.143849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.144171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.144178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-05-15 11:12:51.144482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.144741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.144751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.145053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.145238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.145245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.145427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.145718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.145726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.146056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.146381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.146389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.146701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.147014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.147022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.147351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.147650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.147659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.147975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.148239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.148247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-05-15 11:12:51.148428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.148614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.148622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.148921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.149237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.149244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.149537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.149830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.149837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.150032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.150365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.150374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.150536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.150733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.150740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.150919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.151277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.151285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.151600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.151932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.151940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-05-15 11:12:51.152243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.152570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.152578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.152887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.153059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.153067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.153364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.153688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.153696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.153989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.154289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.154295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.154677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.154998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.155005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.155200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.155360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.155367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.155543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.155844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.155852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-05-15 11:12:51.156005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.156309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.156316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.156627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.156907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.156914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.157083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.157433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.157440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.157739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.158071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.158079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.158248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.158285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.158290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.158603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.158904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.158911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.159257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.159548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.159556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-05-15 11:12:51.159856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.160177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.160185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.160517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.160668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.160676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.160937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.161312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.161320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.161616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.161799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.161807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.162073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.162349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.162356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.162726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.162762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.162768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.163133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.163466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.163474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 
00:26:54.756 [2024-05-15 11:12:51.163859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.164002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.164009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.164310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.164629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.756 [2024-05-15 11:12:51.164638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.756 qpair failed and we were unable to recover it. 00:26:54.756 [2024-05-15 11:12:51.164963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.165152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.165159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.165474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.165640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.165647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.165916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.166226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.166233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.166541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.166756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.166763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.166942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.166978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.166983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-05-15 11:12:51.167296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.167608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.167616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.167935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.168229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.168236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.168561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.168888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.168895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.169198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.169514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.169521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.169802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.170121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.170128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.170433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.170755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.170762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.171074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.171391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.171398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-05-15 11:12:51.171688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.172013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.172020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.172405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.172687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.172695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.173012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.173293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.173301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.173614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.173789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.173796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.174087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.174407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.174415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.174575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.174852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.174861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.175133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.175422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.175430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-05-15 11:12:51.175595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.175880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.175888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.176179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.176345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.176353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.176620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.176813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.176821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.177003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.177271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.177279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.177413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.177704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.177711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.178024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.178340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.178348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.178613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.178944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.178951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-05-15 11:12:51.179126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.179414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.179421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.179575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.179897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.179904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.180228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.180558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.180566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.180870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.181190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.181198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.181518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.181852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.181860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.182204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.182341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.182347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.182560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.182600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.182607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-05-15 11:12:51.182791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.183086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.183094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.183403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.183756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.183764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.183924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.184079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.184086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.184371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.184590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.184598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.184794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.184996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.185003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.185311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.185644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.185651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.185956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.186273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.186280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-05-15 11:12:51.186574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.186880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.186887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.187053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.187360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.187367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.187690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.187838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.187845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.188127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.188447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.188454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.188776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.188946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.188954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.189217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.189534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.189541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.189704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.189914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.189921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-05-15 11:12:51.190245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.190560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.190568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.190832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.191113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.191121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-05-15 11:12:51.191445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-05-15 11:12:51.191776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.191783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.191977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.192295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.192303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.192338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.192655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.192662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.192937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.193110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.193117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.193416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.193743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.193750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-05-15 11:12:51.194064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.194364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.194371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.194558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.194748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.194754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.195053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.195372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.195380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.195548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.195844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.195851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.196167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.196483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.196490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.196807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.196983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.196990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.197293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.197617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.197625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-05-15 11:12:51.197938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.198271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.198279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.198595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.198768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.198775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.199076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.199411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.199418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.199814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.200111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.200118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.200431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.200721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.200728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.200951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.201266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.201273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.201602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.201955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.201963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-05-15 11:12:51.202301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.202634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.202642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.202814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.203007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.203015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.203290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.203611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.203619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.203923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.204241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.204248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.204410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.204591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.204598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.204786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.205121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.205129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.205434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.205759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.205766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-05-15 11:12:51.206080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.206397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.206404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.206588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.206789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.206796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.207081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.207230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.207237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.207427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.207710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.207717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.207780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.208129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.208136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.208463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.208788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.208796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.209102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.209437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.209444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-05-15 11:12:51.209600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.209879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.209886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.210199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.210368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.210375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.210713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.211042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.211049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.211218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.211492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.211500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.211813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.211988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.211995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.212147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.212320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.212328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.212709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.212998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.213006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-05-15 11:12:51.213312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.213633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.213641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.213986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.214304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.214312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.214451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.214819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.214826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.214993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.215321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.215328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.215656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.215960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.215967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.216270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.216587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.216595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.216897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.217185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.217192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-05-15 11:12:51.217515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.217814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.217822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.218127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.218412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.218420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.218735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.219027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.219034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.219207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.219505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.219513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.219874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.220170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.220178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.220496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.220682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.220690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.221019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.221351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.221358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-05-15 11:12:51.221551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.221847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.221855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.222172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.222476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.222485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.222766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.223072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.223079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-05-15 11:12:51.223358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-05-15 11:12:51.223542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.223558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.223852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.224045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.224052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.224383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.224528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.224534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.224821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.225138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.225145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-05-15 11:12:51.225455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.225636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.225644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.225969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.226282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.226290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.226591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.226791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.226798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.227084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.227364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.227372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.227668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.228034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.228042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.228331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.228651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.228659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.228997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.229070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.229075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-05-15 11:12:51.229150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.229305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.229311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.229524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.229710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.229717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.229890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.230176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.230183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.230508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.230770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.230778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.230993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.231251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.231260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.231565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.231851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.231859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.232154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.232523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.232530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-05-15 11:12:51.232742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.233078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.233087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.233259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.233592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.233600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.233928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.234261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.234268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.234310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.234588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.234596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.234923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.235250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.235257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.235385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.235674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.235683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.235987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.236323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.236330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-05-15 11:12:51.236656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.237001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.237008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.237348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.237645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.237653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.237966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.238306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.238314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.238612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.238712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.238722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.239014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.239337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.239345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.239684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.239997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.240004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.240211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.240392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.240399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-05-15 11:12:51.240710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.240898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.240905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.241204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.241375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.241381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.241751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.242072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.242079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.242367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.242727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.242735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.243043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.243362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.243370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.243697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.244019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.244027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.244289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.244610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.244618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-05-15 11:12:51.244944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.245229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.245236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.245573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.245948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.245955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.246251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.246427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.246436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.246619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.246801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.246808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.246973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.247252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.247260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.247568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.247850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.247857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.248062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.248419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.248426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-05-15 11:12:51.248717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.249045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.249052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.249374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.249553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.249561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.249785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.250104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.250111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.250418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.250630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.250638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.250972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.251297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.251305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-05-15 11:12:51.251622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.251923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-05-15 11:12:51.251930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.252100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.252437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.252445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 
00:26:54.760 [2024-05-15 11:12:51.252737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.253074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.253081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.253374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.253651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.253658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.253968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.254144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.254151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.254215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.254496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.254504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.254825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.255142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.255149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.255474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.255804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.255811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.256124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.256442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.256449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 
00:26:54.760 [2024-05-15 11:12:51.256619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.256920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.256927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.257235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.257440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.257448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.257761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.257924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.257931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.258208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.258357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.258364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.258668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.258850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.258858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.259056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.259378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.259386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.259543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.259815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.259822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 
00:26:54.760 [2024-05-15 11:12:51.259966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.260239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.260247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.260569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.260895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.260903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.261061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.261216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.261222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.261513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.261808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.261816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.262100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.262427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.262435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.262712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.262926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.262933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.263249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.263423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.263429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 
00:26:54.760 [2024-05-15 11:12:51.263711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.264030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.264038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.264354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.264660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.264668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.264854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.265030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.265038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.265216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.265539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.265551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.265752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.266046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.266053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.266370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.266670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.266678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 00:26:54.760 [2024-05-15 11:12:51.266977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.267313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.760 [2024-05-15 11:12:51.267321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.760 qpair failed and we were unable to recover it. 
00:26:54.760 [2024-05-15 11:12:51.267471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.267746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.267754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.268075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.268235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.268243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.268413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.268691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.268699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.268911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.269122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.269129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.269449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.269777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.269786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.269956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.270312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.270320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.270614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.270956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.270963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.271259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.271596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.271603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.271916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.272107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.272114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.272388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.272692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.272699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.272851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.273141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.273148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.273348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.273666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.273673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.273939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.274263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.274272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.274655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.274952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.274960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.275167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.275349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.275356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.275431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.275714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.275722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.275901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.276050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.276057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.276347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.276687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.276695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.277020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.277345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.277352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.277671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.277889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.277898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.278073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.278396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.278405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.278718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.279056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.279064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.279398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.279558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.279566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.279861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.280246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.280253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.280436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.280500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.280508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.280659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.280971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.280978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.281287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.281582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.281590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-05-15 11:12:51.281892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-05-15 11:12:51.282216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.282225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.282433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.282655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.282662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.282813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.283108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.283116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.283466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.283758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.283766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.284089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.284426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.284433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.284648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.284929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.284936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.284978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.285270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.285277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.285585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.285893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.285901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.286206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.286502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.286510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.286740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.286926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.286934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.287162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.287310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.287317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.287473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.287714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.287721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.288029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.288359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.288367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.288636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.288976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.288984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.289161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.289320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.289330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.289400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.289612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.289620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.289805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.290108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.290115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.290432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.290632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.290639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.290956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.291251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.291258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.291558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.291846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.291853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.292188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.292526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.292534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.292866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.293185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.293193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.293478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.293631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.293641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.293935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.294262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.294270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.294624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.294943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.294951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.295263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.295452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.295459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.295643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.295946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.295953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.296122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.296499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.296507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.296821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.297141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.297148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.297422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.297611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.297618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.297779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.298129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.298136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.298318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.298625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.298632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.298795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.299115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.299122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.299282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.299455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.299465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.299754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.300050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.300057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.300373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.300693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.300702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.300892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.301098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.301106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.301393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.301448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.301455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.301654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.302008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.302016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.302391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.302697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.302705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.303044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.303394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.303401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.303681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.303991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.304000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.304170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.304488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.304495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.304824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.304999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.305007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.305291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.305608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.305616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.305919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.306252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.306259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.306584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.306862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.306869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.307025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.307205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.307213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.307488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.307769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.307778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.308086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.308263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.308270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.308565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.308845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.308853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.309138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.309323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.309332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.309657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.309995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.310002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.310287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.310575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.310583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.310888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.311216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.311228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.311538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.311755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.311762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.312085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.312402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.312410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-05-15 11:12:51.312760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-05-15 11:12:51.313077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.313084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.313385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.313549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.313557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.313717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.313943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.313950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.314254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.314448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.314456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.314770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.315080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.315089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.315270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.315428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.315435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.315718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.316016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.316024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.316313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.316497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.316504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.316780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.317100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.317108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.317452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.317630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.317638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.317945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.318207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.318215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.318537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.318876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.318886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.319054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.319368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.319375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.319533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.319824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.319831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.320178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.320467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.320476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.320730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.320908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.320916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.320957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.321221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.321228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.321437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.321634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.321641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.321812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.322105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.322113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.322431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.322759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.322766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.322963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.323239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.323246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.323425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.323625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.323632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.323942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.324121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.324128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.324296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.324604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.324611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.324959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.325252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.325259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.325556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.325732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.325740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.325883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.325945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.325953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.326247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.326407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.326414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.326716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.327049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.327057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.327358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.327525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.327532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.327826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.328137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.328144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.328455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.328711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.328718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.329048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.329346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.329354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.329654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.329986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.329993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.330281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.330606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.330613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.330933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.331281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.331289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.331590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.331861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.331869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.332184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.332502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.332509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.332789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.333104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.333111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.333313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.333588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.333596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.333777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.334112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.334119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.334414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.334586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-05-15 11:12:51.334593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-05-15 11:12:51.334910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.335245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.335252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-05-15 11:12:51.335553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.335711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.335718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-05-15 11:12:51.336042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.336219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.336225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-05-15 11:12:51.336525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.336660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.336667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-05-15 11:12:51.336998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.337288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.337295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-05-15 11:12:51.337599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.337872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.337880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-05-15 11:12:51.338227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.338517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.338525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-05-15 11:12:51.338701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.338917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.338924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-05-15 11:12:51.339256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.339567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.339574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-05-15 11:12:51.339738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.339999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-05-15 11:12:51.340006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.340187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.340369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.340377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.340679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.341001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.341008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.341170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.341399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.341407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.341708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.341894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.341902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-05-15 11:12:51.342248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.342432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.342439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.342751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.343079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.343088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.343242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.343411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.343419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.343693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.344004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.344011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.344319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.344641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.344649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.344973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.345293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.345300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.345605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.345941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.345949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-05-15 11:12:51.346262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.346554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.346562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.346753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.346957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.346964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.347151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.347450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.347458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.347621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.347930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.347936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.348106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.348413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.348421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.348731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.349024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.349032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.349243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.349503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.349510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-05-15 11:12:51.349826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.350010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.350017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.350321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.350645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.350653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.350973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.351290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.351297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.351625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.351814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.351821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.352144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.352467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.352474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.352791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.353135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.353142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.353455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.353795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.353803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-05-15 11:12:51.354146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.354492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.354499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.354808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.355136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.355144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.355469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.355800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.355807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.356056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.356374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.356382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.356717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.357037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.357045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.357354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.357690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.357698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.357966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.358133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.358140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-05-15 11:12:51.358471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.358790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.358798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.359092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.359411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.359418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.359710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.360007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.360015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.360325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.360512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.360518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.360783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.360824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.360830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.361150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.361319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.361327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.361637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.361815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.361822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-05-15 11:12:51.362005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.362184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.362192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.362531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.362755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.362762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.363088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.363377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.363385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.363523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.363804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.363812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.364132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.364452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.364461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.364788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.365126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.365134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.365446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.365746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.365754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-05-15 11:12:51.365940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.366246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.366253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.366557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.366711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.366719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.367034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.367171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.367179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.367484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.367653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.367661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.367923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.368214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.368222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.368415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.368715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.368723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.369024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.369339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.369347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-05-15 11:12:51.369641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.369680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.369687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.369963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.370278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.370285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.370601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.370927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.370935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-05-15 11:12:51.371246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.371562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-05-15 11:12:51.371570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.371755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.372084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.372092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.372396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.372713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.372721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.372875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.373148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.373156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-05-15 11:12:51.373335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.373670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.373678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.373910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.374188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.374196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.374469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.374769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.374777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.375051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.375367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.375376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.375686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.376002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.376010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.376307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.376637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.376645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.376909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.377223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.377230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-05-15 11:12:51.377531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.377797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.377804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.378162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.378450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.378458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.378639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.378926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.378934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.378975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.379344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.379352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.379657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.379944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.379951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.380309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.380477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.380484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.380880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.381240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.381248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-05-15 11:12:51.381430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.381721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.381729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.382015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.382282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.382289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.382583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.382923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.382931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.383256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.383593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.383600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.383761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.383920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.383926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.384251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.384555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.384563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.384844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.385038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.385045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-05-15 11:12:51.385333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.385619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.385627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.385923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.386258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.386266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.386555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.386853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.386861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.387178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.387480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.387488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.387798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.388143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.388150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.388469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.388756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.388765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.389074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.389358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.389365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-05-15 11:12:51.389560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.389856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.389865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.390180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.390483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.390491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.390577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.390856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.390863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.391198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.391386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.391392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.391572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.391844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.391852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.392041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.392218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.392226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.392536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.392793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.392800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.764 [2024-05-15 11:12:51.393114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.393276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.393283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.393541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.393824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.393832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.394144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.394441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.394448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.394748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.394945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.394952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-05-15 11:12:51.395192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.395344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-05-15 11:12:51.395352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:55.038 [2024-05-15 11:12:51.395716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-05-15 11:12:51.395909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-05-15 11:12:51.395917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-05-15 11:12:51.396258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-05-15 11:12:51.396444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-05-15 11:12:51.396452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-05-15 11:12:51.396739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.039 [2024-05-15 11:12:51.396926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.039 [2024-05-15 11:12:51.396933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.039 qpair failed and we were unable to recover it.
00:26:55.039 [... the same sequence -- two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f6968000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 11:12:51.396 through 11:12:51.479 ...]
00:26:55.044 [2024-05-15 11:12:51.479913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-05-15 11:12:51.480232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.044 [2024-05-15 11:12:51.480239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.044 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.480534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.480849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.480856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.481015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.481296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.481302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.481610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.481796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.481802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.482072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.482400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.482406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.482704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.483039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.483045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.483365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.483679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.483685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 
00:26:55.045 [2024-05-15 11:12:51.483838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.484001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.484007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.484299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.484613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.484622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.484780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.484932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.484938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.485274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.485578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.485585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.485896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.486200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.486206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.486378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.486655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.486662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.486933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.487125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.487137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 
00:26:55.045 [2024-05-15 11:12:51.487285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.487565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.487573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.487614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.487800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.487806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.488129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.488166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.488172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.488332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.488654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.488661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.489001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.489334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.489343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.489385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.489689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.489695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.489858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.490023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.490029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 
00:26:55.045 [2024-05-15 11:12:51.490207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.490591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.490598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.490974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.491319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.045 [2024-05-15 11:12:51.491324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.045 qpair failed and we were unable to recover it. 00:26:55.045 [2024-05-15 11:12:51.491485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.491767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.491779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.491954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.492141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.492148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.492375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.492701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.492708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.493026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.493344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.493350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.493641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.493927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.493933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 
00:26:55.046 [2024-05-15 11:12:51.494114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.494413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.494421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.494812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.495107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.495114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.495429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.495682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.495688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.496071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.496363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.496371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.496707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.496887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.496894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.497212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.497366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.497372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.497646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.497879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.497885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 
00:26:55.046 [2024-05-15 11:12:51.497923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.498182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.498189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.498573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.498848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.498855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.499037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.499226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.499232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.499540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.499610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.499616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.499832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.500145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.500151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.500325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.500363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.500369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.500722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.500909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.500915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 
00:26:55.046 [2024-05-15 11:12:51.501177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.501502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.501508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.501693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.501904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.501911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.502195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.502370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.502376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.502695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.503036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.503043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.503331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.503470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.503476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.503706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.504016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.504023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.504326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.504509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.504516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 
00:26:55.046 [2024-05-15 11:12:51.504825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.505094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.505102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.505430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.505619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.505626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.505811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.506198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.506205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.046 qpair failed and we were unable to recover it. 00:26:55.046 [2024-05-15 11:12:51.506385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.506702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.046 [2024-05-15 11:12:51.506709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.506926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.507209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.507216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.507516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.507854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.507861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.508185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.508368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.508374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 
00:26:55.047 [2024-05-15 11:12:51.508554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.508852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.508858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.509155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.509447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.509454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.509785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.510075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.510081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.510399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.510695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.510702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.510970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.511270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.511276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.511592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.511822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.511828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.512107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.512397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.512405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 
00:26:55.047 [2024-05-15 11:12:51.512717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.512895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.512901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.513203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.513527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.513534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.513662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.513966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.513973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.514291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.514612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.514620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.514814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.515145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.515152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.515328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.515690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.515697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.515883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.516224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.516231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 
00:26:55.047 [2024-05-15 11:12:51.516553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.516723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.516730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.517026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.517350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.517357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.517689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.517850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.517856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.518163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.518516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.518523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.518893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.519054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.519060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.519343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.519384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.519390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.519688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.519998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.520005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 
00:26:55.047 [2024-05-15 11:12:51.520316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.520461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.520468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.047 qpair failed and we were unable to recover it. 00:26:55.047 [2024-05-15 11:12:51.520792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.520978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.047 [2024-05-15 11:12:51.520985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.521278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.521430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.521437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.521722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.522062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.522070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.522248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.522393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.522399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.522606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.522892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.522899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.523195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.523479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.523486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 
00:26:55.048 [2024-05-15 11:12:51.523823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.524153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.524160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.524322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.524558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.524565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.524875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.525199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.525205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.525429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.525592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.525599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.525816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.526142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.526155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.526368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.526671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.526678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.527010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.527280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.527286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 
00:26:55.048 [2024-05-15 11:12:51.527603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.527983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.527990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.528276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.528596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.528604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.528909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.529210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.529216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.529529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.529823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.529830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.530142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.530462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.530469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.530784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.531108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.531114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 00:26:55.048 [2024-05-15 11:12:51.531401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.531705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.531712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.048 qpair failed and we were unable to recover it. 
00:26:55.048 [2024-05-15 11:12:51.532058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.532351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.048 [2024-05-15 11:12:51.532357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.532680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.532870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.532877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.533188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.533482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.533488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.533826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.534131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.534138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.534337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.534671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.534679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.535020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.535332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.535347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.535528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.535914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.535920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 
00:26:55.049 [2024-05-15 11:12:51.536087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.536321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.536328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.536520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.536832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.536839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.537153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.537339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.537345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.537526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.537851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.537858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.538160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.538372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.538380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.538698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.538960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.538968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.539278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.539599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.539606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 
00:26:55.049 [2024-05-15 11:12:51.539883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.540182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.540190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.540505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.540828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.540836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.541156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.541195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.541201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.541478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.541805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.541812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.542161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.542553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.542561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.542754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.542954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.542961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.543353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.543521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.543527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 
00:26:55.049 [2024-05-15 11:12:51.543921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.544210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.544216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.544521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.544685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.544692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.545008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.545338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.545345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.545655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.545840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.545847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.545884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.546175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.546181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.546348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.546631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.546638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.546975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.547269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.547275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 
00:26:55.049 [2024-05-15 11:12:51.547444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.547603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.547610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.547834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.548028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.049 [2024-05-15 11:12:51.548034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.049 qpair failed and we were unable to recover it. 00:26:55.049 [2024-05-15 11:12:51.548337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.548513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.548520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.548844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.549163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.549169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.549478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.549800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.549806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.550133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.550274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.550281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.550615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.550778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.550783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 
00:26:55.050 [2024-05-15 11:12:51.550939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.551277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.551284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.551478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.551781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.551787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.552078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.552396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.552402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.552709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.552973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.552979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.553275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.553606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.553613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.553886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.554103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.554109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.554470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.554776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.554783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 
00:26:55.050 [2024-05-15 11:12:51.555061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.555258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.555264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.555458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.555761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.555767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.556097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.556305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.556312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.556373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.556535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.556542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.556879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.557087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.557094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.557396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.557732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.557738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.558027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.558335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.558343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 
00:26:55.050 [2024-05-15 11:12:51.558655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.558959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.558966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.559182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.559483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.559490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.559805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.559970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.559979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.560153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.560387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.560394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.560700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.560897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.560904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.560956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.561207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.561214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.561376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.561639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.561645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 
00:26:55.050 [2024-05-15 11:12:51.561873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.561967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.561975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.562174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.562373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.562380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.050 qpair failed and we were unable to recover it. 00:26:55.050 [2024-05-15 11:12:51.562725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.050 [2024-05-15 11:12:51.563012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.563018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.563223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.563565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.563571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.563717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.564046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.564052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.564388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.564714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.564722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.565044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.565247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.565253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 
00:26:55.051 [2024-05-15 11:12:51.565409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.565561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.565567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.565859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.566125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.566132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.566319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.566700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.566707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.567011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.567297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.567303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.567612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.567888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.567895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.568261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.568461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.568467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.568684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.568863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.568869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 
00:26:55.051 [2024-05-15 11:12:51.569154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.569340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.569346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.569501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.569951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.569961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.570356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.570523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.570529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.570904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.571069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.571075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.571241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.571450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.571456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.571663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.571955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.571961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.572146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.572501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.572508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 
00:26:55.051 [2024-05-15 11:12:51.572895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.573234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.573241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.573282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.573460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.573467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.573786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.573982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.573989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.574165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.574399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.574407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.574583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.574892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.574900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.575236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.575408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.575415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.575683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.575868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.575875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 
00:26:55.051 [2024-05-15 11:12:51.576152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.576506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.576513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.576825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.577143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.577149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.577434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.577605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.577612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.577909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.578204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.051 [2024-05-15 11:12:51.578210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.051 qpair failed and we were unable to recover it. 00:26:55.051 [2024-05-15 11:12:51.578526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.578748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.578754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.578930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.579335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.579341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.579659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.579970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.579976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 
00:26:55.052 [2024-05-15 11:12:51.580280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.580516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.580522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.580796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.581133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.581139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.581199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.581351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.581357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.581669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.581877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.581883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.582170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.582464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.582471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.582774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.583072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.583079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.583146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 11:12:51 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:55.052 [2024-05-15 11:12:51.583499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.583506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 
00:26:55.052 11:12:51 -- common/autotest_common.sh@860 -- # return 0 00:26:55.052 [2024-05-15 11:12:51.583865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 11:12:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:55.052 [2024-05-15 11:12:51.584054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.584062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 11:12:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.052 [2024-05-15 11:12:51.584363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 11:12:51 -- common/autotest_common.sh@10 -- # set +x 00:26:55.052 [2024-05-15 11:12:51.584521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.584528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.584729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.585030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.585037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.585235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.585417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.585423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.585713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.585904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.585910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.586198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.586373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.586379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.586812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.586994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.587000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 
00:26:55.052 [2024-05-15 11:12:51.587322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.587576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.587584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.587897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.588075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.588081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.588328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.588619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.588626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.588905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.589180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.589187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.589499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.589721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.589728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.590068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.590378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.590385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.590692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.591023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.591029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 
00:26:55.052 [2024-05-15 11:12:51.591357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.591629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.591636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.591976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.592303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.592310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.592485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.592814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.592820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.593120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.593304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.593310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.052 qpair failed and we were unable to recover it. 00:26:55.052 [2024-05-15 11:12:51.593611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.052 [2024-05-15 11:12:51.593924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.593931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-05-15 11:12:51.594103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.594428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.594435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-05-15 11:12:51.594733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.595064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.595071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 
00:26:55.053 [2024-05-15 11:12:51.595370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.595692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.595699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-05-15 11:12:51.596002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.596180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.596186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-05-15 11:12:51.596374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.596657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.596664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-05-15 11:12:51.596886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.597074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.597081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-05-15 11:12:51.597423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.597731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.597738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-05-15 11:12:51.597916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.598197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.598203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.053 [2024-05-15 11:12:51.598497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.598681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.598687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 
00:26:55.053 [2024-05-15 11:12:51.598850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.599126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.053 [2024-05-15 11:12:51.599134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.053 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.599432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.599772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.599780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.600079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.600381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.600387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.600558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.600845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.600852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.601213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.601508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.601515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.601695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.602034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.602041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.602326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.602691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.602698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 
00:26:55.054 [2024-05-15 11:12:51.602884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.603192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.603199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.603484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.603784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.603791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.604087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.604297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.604303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.604609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.604802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.604809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.604918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.605193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.054 [2024-05-15 11:12:51.605200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.054 qpair failed and we were unable to recover it. 00:26:55.054 [2024-05-15 11:12:51.605506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.605815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.605822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.606129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.606441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.606448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 
00:26:55.055 [2024-05-15 11:12:51.606737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.607070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.607077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.607395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.607714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.607721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.607928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.608238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.608244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.608539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.608851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.608858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.609172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.609486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.609492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.609800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.610092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.610099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.610254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.610497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.610505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 
00:26:55.055 [2024-05-15 11:12:51.610818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.611106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.611112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.611304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.611654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.611661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.611967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.612138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.612145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.612441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.612731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.612739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.612857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.612997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.613004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.613182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.613558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.613565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.613727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.614024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.614030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 
00:26:55.055 [2024-05-15 11:12:51.614349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.614510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.614517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.614811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.615140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.615147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.615487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.615745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.615752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.615792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.615977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.615983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.616204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.616334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.616341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.616652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.616990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.616996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.617279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.617453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.617460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 
00:26:55.055 [2024-05-15 11:12:51.617617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.618004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.618012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.618184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.618455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.618461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.618669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.619021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.619027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.619312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.619485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.619492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.619803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.619981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.619988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.620261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.620438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.620445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.055 qpair failed and we were unable to recover it. 00:26:55.055 [2024-05-15 11:12:51.620826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.621117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.055 [2024-05-15 11:12:51.621124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 
00:26:55.056 [2024-05-15 11:12:51.621303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.621594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.621602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.621915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.622208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.622214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.622500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 11:12:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.056 [2024-05-15 11:12:51.622832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.622840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 11:12:51 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.056 [2024-05-15 11:12:51.623134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.623431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 11:12:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.056 [2024-05-15 11:12:51.623438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 11:12:51 -- common/autotest_common.sh@10 -- # set +x 00:26:55.056 [2024-05-15 11:12:51.623716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.623752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.623758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.624086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.624408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.624415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 
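Note: interleaved with the connection errors, target_disconnect.sh has begun setting up the target side: it installs a cleanup trap (process_shm / nvmftestfini on exit) and issues rpc_cmd bdev_malloc_create 64 512 -b Malloc0, i.e. it asks the running SPDK target for a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. rpc_cmd in these tests effectively forwards its arguments to scripts/rpc.py against the target's RPC socket, so a rough standalone sketch of the same step would be (the checkout path is assumed, not taken from this section):
# Sketch only: create the RAM-backed bdev that the later subsystem steps expose.
SPDK_DIR=/path/to/spdk                       # hypothetical checkout location
# 64 MB backing store, 512-byte logical blocks, bdev name "Malloc0"
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0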
00:26:55.056 [2024-05-15 11:12:51.624708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.624963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.624969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.625255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.625416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.625422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.625602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.625975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.625982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.626300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.626607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.626614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.626817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.626976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.626983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.627282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.627600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.627608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.628022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.628319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.628326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 
00:26:55.056 [2024-05-15 11:12:51.628645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.628870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.628877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.629155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.629384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.629390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.629535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.629844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.629851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.630193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.630540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.630550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.630852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.631169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.631175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.631386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.631723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.631729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.632032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.632097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.632110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 
00:26:55.056 [2024-05-15 11:12:51.632372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.632540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.632551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.632782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.633078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.633085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.633483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.633819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.633826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.633997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.634271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.634279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.634470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.634899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.634905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.635205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.635372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.635379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.635702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.636041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.636048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 
00:26:55.056 [2024-05-15 11:12:51.636390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.636649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.636656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.056 qpair failed and we were unable to recover it. 00:26:55.056 [2024-05-15 11:12:51.636962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.056 [2024-05-15 11:12:51.637149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.637155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.637476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.637817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.637823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.638121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.638294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.638300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 Malloc0 00:26:55.057 [2024-05-15 11:12:51.638582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.638883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.638889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.639178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 11:12:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.057 [2024-05-15 11:12:51.639356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.639362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.639540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 11:12:51 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:55.057 [2024-05-15 11:12:51.639839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.639845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 
00:26:55.057 11:12:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.057 [2024-05-15 11:12:51.640153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 11:12:51 -- common/autotest_common.sh@10 -- # set +x 00:26:55.057 [2024-05-15 11:12:51.640341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.640348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.640642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.640964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.640970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.641287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.641594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.641600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.641780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.641989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.641995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.642177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.642327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.642333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.642674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.642956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.642962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.643277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.643468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.643475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 
00:26:55.057 [2024-05-15 11:12:51.643790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.643961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.643968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.644150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.644522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.644530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.644834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.645010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.645017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.645288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.645472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.645479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.645789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.645979] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.057 [2024-05-15 11:12:51.646085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.646092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.646416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.646719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.646725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.647028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.647323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.647329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 
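Note: the rpc_cmd nvmf_create_transport -t tcp -o call issued a few lines back has now taken effect; the tcp.c NOTICE "*** TCP Transport Init ***" is the target confirming the TCP transport was created. A hedged standalone sketch of the same step (only the required transport type is shown; the -o flag seen in the log is an optional TCP transport tuning flag, and the checkout path is assumed):
# Sketch only: create the NVMe-oF TCP transport inside a running nvmf_tgt.
SPDK_DIR=/path/to/spdk                       # hypothetical checkout location
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp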
00:26:55.057 [2024-05-15 11:12:51.647624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.647839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.647845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.648157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.648444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.648450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.648749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.648959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.648965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.649266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.649452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.649458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.649663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.650009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.650016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.650229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.650527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.650534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.057 [2024-05-15 11:12:51.650677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.651000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.651006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 
00:26:55.057 [2024-05-15 11:12:51.651409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.651713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.057 [2024-05-15 11:12:51.651720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.057 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.651848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.652117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.652123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.652413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.652720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.652727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.652906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.653247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.653253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.653559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.653861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.653867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.654186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.654477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.654484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.654641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 11:12:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.058 [2024-05-15 11:12:51.655019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.655026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 
00:26:55.058 [2024-05-15 11:12:51.655195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 11:12:51 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.058 [2024-05-15 11:12:51.655479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.655485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 11:12:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.058 [2024-05-15 11:12:51.655679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 11:12:51 -- common/autotest_common.sh@10 -- # set +x 00:26:55.058 [2024-05-15 11:12:51.656003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.656010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.656182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.656487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.656493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.656806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.657004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.657010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.657327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.657625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.657631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.657948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.658133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.658140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.658467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.658651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.658657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 
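Note: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 creates the NVMe-oF subsystem the test will later disconnect from: -a allows any host NQN to connect and -s sets the serial number the controller reports. Standalone sketch (checkout path assumed):
# Sketch only: create subsystem cnode1, open to any host, with a fixed serial number.
SPDK_DIR=/path/to/spdk                       # hypothetical checkout location
"$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001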
00:26:55.058 [2024-05-15 11:12:51.658947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.659252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.659258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.659557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.659851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.659858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.660253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.660566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.660576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.660785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.661059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.661065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.661246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.661416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.661422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.661647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.661995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.662001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.662412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.662755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.662762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 
00:26:55.058 [2024-05-15 11:12:51.663079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.663406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.663412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.663723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.664034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.664041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.664360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.664634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.664640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.664816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.665093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.665099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.665325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.665711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.665717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.666020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.666092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.666099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 00:26:55.058 [2024-05-15 11:12:51.666321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.666629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 [2024-05-15 11:12:51.666636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.058 qpair failed and we were unable to recover it. 
00:26:55.058 [2024-05-15 11:12:51.666993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.058 11:12:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.058 [2024-05-15 11:12:51.667302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.667309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 11:12:51 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.059 [2024-05-15 11:12:51.667606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 11:12:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.059 [2024-05-15 11:12:51.667806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.667812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 11:12:51 -- common/autotest_common.sh@10 -- # set +x 00:26:55.059 [2024-05-15 11:12:51.668183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.668448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.668454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.668531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.668830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.668837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.669134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.669315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.669321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.669605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.669881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.669887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.670105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.670292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.670299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 
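Note: rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 attaches the Malloc0 bdev created above as a namespace of cnode1 (it receives the first free NSID). Standalone sketch (checkout path assumed):
# Sketch only: expose the Malloc0 bdev as a namespace of subsystem cnode1.
SPDK_DIR=/path/to/spdk                       # hypothetical checkout location
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0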
00:26:55.059 [2024-05-15 11:12:51.670609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.670967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.670973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.671261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.671553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.671560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.671756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.671935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.671941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.672165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.672518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.672525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.672915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.673185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.673191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.673498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.673659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.673666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.673947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.674208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.674214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 
00:26:55.059 [2024-05-15 11:12:51.674395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.674665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.674671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.674965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.675193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.675199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.675496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.675710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.675716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.675915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.676198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.676203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.676518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.676666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.676673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.676886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.677037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.677044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 00:26:55.059 [2024-05-15 11:12:51.677211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.677557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.059 [2024-05-15 11:12:51.677563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.059 qpair failed and we were unable to recover it. 
00:26:55.322 [2024-05-15 11:12:51.677914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.322 [2024-05-15 11:12:51.678252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.322 [2024-05-15 11:12:51.678260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.322 qpair failed and we were unable to recover it.
00:26:55.322 [2024-05-15 11:12:51.678432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.322 [2024-05-15 11:12:51.678619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.322 [2024-05-15 11:12:51.678626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.322 qpair failed and we were unable to recover it.
00:26:55.322 [2024-05-15 11:12:51.678946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.322 11:12:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:55.322 [2024-05-15 11:12:51.679291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.322 [2024-05-15 11:12:51.679298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.322 qpair failed and we were unable to recover it.
00:26:55.322 11:12:51 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:55.323 [2024-05-15 11:12:51.679585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 11:12:51 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:55.323 [2024-05-15 11:12:51.679898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.679905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 11:12:51 -- common/autotest_common.sh@10 -- # set +x
00:26:55.323 [2024-05-15 11:12:51.680287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.680520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.680527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 [2024-05-15 11:12:51.680815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.681119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.681133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 [2024-05-15 11:12:51.681293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.681643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.681650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-05-15 11:12:51.681982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.682151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.682157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-05-15 11:12:51.682480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.682714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.682721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-05-15 11:12:51.683040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.683364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.683370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-05-15 11:12:51.683702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.684030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.684036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-05-15 11:12:51.684335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.684506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.684512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-05-15 11:12:51.684799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.685079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-05-15 11:12:51.685085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 
00:26:55.323 [2024-05-15 11:12:51.685402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.685667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.685673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 [2024-05-15 11:12:51.685843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.686040] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:26:55.323 [2024-05-15 11:12:51.686118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-05-15 11:12:51.686125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6968000b90 with addr=10.0.0.2, port=4420
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 [2024-05-15 11:12:51.686265] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:55.323 11:12:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:55.323 11:12:51 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:55.323 11:12:51 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:55.323 11:12:51 -- common/autotest_common.sh@10 -- # set +x
00:26:55.323 [2024-05-15 11:12:51.696962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:55.323 [2024-05-15 11:12:51.697027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:55.323 [2024-05-15 11:12:51.697041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:55.323 [2024-05-15 11:12:51.697046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:55.323 [2024-05-15 11:12:51.697051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90
00:26:55.323 [2024-05-15 11:12:51.697066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 11:12:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:55.323 11:12:51 -- host/target_disconnect.sh@58 -- # wait 509221
00:26:55.323 [2024-05-15 11:12:51.706808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:55.323 [2024-05-15 11:12:51.706859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:55.323 [2024-05-15 11:12:51.706871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:55.323 [2024-05-15 11:12:51.706876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:55.323 [2024-05-15 11:12:51.706880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90
00:26:55.323 [2024-05-15 11:12:51.706890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 [2024-05-15 11:12:51.716853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:55.323 [2024-05-15 11:12:51.716902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:55.323 [2024-05-15 11:12:51.716913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:55.323 [2024-05-15 11:12:51.716918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:55.323 [2024-05-15 11:12:51.716922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90
00:26:55.323 [2024-05-15 11:12:51.716933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 [2024-05-15 11:12:51.726760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:55.323 [2024-05-15 11:12:51.726810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:55.323 [2024-05-15 11:12:51.726821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:55.323 [2024-05-15 11:12:51.726826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:55.323 [2024-05-15 11:12:51.726830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90
00:26:55.323 [2024-05-15 11:12:51.726840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:55.323 qpair failed and we were unable to recover it.
00:26:55.323 [2024-05-15 11:12:51.736786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.323 [2024-05-15 11:12:51.736839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.323 [2024-05-15 11:12:51.736852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.323 [2024-05-15 11:12:51.736857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.323 [2024-05-15 11:12:51.736862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.323 [2024-05-15 11:12:51.736872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-05-15 11:12:51.746715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.323 [2024-05-15 11:12:51.746762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.323 [2024-05-15 11:12:51.746773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.323 [2024-05-15 11:12:51.746778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.323 [2024-05-15 11:12:51.746782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.323 [2024-05-15 11:12:51.746792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-05-15 11:12:51.756849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.323 [2024-05-15 11:12:51.756894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.323 [2024-05-15 11:12:51.756905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.323 [2024-05-15 11:12:51.756910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.323 [2024-05-15 11:12:51.756915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.323 [2024-05-15 11:12:51.756925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 
00:26:55.324 [2024-05-15 11:12:51.766855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.766902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.766913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.766918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.766922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.766933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 00:26:55.324 [2024-05-15 11:12:51.776872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.776925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.776936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.776941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.776948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.776958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 00:26:55.324 [2024-05-15 11:12:51.786892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.786937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.786947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.786952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.786956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.786966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 
00:26:55.324 [2024-05-15 11:12:51.796904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.796957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.796968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.796972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.796977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.796986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 00:26:55.324 [2024-05-15 11:12:51.806962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.807009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.807020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.807025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.807029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.807039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 00:26:55.324 [2024-05-15 11:12:51.817021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.817088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.817099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.817104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.817108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.817118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 
00:26:55.324 [2024-05-15 11:12:51.826972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.827038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.827049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.827053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.827058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.827067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 00:26:55.324 [2024-05-15 11:12:51.837081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.837128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.837138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.837143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.837147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.837156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 00:26:55.324 [2024-05-15 11:12:51.847090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.847137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.847148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.847152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.847156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.847167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 
00:26:55.324 [2024-05-15 11:12:51.857025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.857076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.857087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.857092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.857096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.857106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 00:26:55.324 [2024-05-15 11:12:51.867166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.867211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.867222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.867230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.867234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.324 [2024-05-15 11:12:51.867244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.324 qpair failed and we were unable to recover it. 00:26:55.324 [2024-05-15 11:12:51.877189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.324 [2024-05-15 11:12:51.877233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.324 [2024-05-15 11:12:51.877244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.324 [2024-05-15 11:12:51.877248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.324 [2024-05-15 11:12:51.877252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.877262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 
00:26:55.325 [2024-05-15 11:12:51.887148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.887198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.887209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.887214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.887218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.887228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 00:26:55.325 [2024-05-15 11:12:51.897341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.897397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.897407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.897412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.897416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.897426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 00:26:55.325 [2024-05-15 11:12:51.907127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.907176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.907187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.907191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.907196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.907205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 
00:26:55.325 [2024-05-15 11:12:51.917310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.917370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.917381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.917386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.917390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.917400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 00:26:55.325 [2024-05-15 11:12:51.927303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.927349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.927360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.927365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.927369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.927379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 00:26:55.325 [2024-05-15 11:12:51.937307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.937361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.937371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.937376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.937380] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.937390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 
00:26:55.325 [2024-05-15 11:12:51.947384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.947432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.947443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.947447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.947452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.947461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 00:26:55.325 [2024-05-15 11:12:51.957458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.957532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.957543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.957555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.957561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.957573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 00:26:55.325 [2024-05-15 11:12:51.967520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.325 [2024-05-15 11:12:51.967585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.325 [2024-05-15 11:12:51.967596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.325 [2024-05-15 11:12:51.967601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.325 [2024-05-15 11:12:51.967605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.325 [2024-05-15 11:12:51.967615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.325 qpair failed and we were unable to recover it. 
00:26:55.588 [2024-05-15 11:12:51.977402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:51.977455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:51.977465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:51.977469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:51.977474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:51.977483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:51.987537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:51.987590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:51.987600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:51.987605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:51.987609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:51.987619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:51.997585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:51.997627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:51.997637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:51.997642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:51.997646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:51.997656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 
00:26:55.588 [2024-05-15 11:12:52.007529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.007590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.007601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.007606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.007610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.007620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:52.017569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.017659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.017669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.017674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.017678] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.017688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:52.027460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.027504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.027515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.027519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.027524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.027533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 
00:26:55.588 [2024-05-15 11:12:52.037626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.037670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.037680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.037685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.037689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.037699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:52.047631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.047712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.047725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.047730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.047734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.047744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:52.057659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.057710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.057720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.057725] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.057729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.057739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 
00:26:55.588 [2024-05-15 11:12:52.067662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.067709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.067719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.067723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.067727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.067737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:52.077708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.077751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.077762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.077766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.077771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.077780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:52.087732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.087779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.087789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.087794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.087798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.087810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 
00:26:55.588 [2024-05-15 11:12:52.097779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.588 [2024-05-15 11:12:52.097827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.588 [2024-05-15 11:12:52.097838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.588 [2024-05-15 11:12:52.097842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.588 [2024-05-15 11:12:52.097846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.588 [2024-05-15 11:12:52.097856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.588 qpair failed and we were unable to recover it. 00:26:55.588 [2024-05-15 11:12:52.107776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.107815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.107826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.107830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.107834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.107844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.117845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.117892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.117902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.117907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.117911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.117921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 
00:26:55.589 [2024-05-15 11:12:52.127871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.127917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.127928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.127932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.127937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.127946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.137877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.137959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.137972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.137977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.137981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.137991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.147890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.147930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.147941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.147945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.147950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.147959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 
00:26:55.589 [2024-05-15 11:12:52.157948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.157990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.158000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.158005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.158009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.158019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.167977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.168025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.168035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.168040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.168044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.168054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.178025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.178116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.178126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.178130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.178137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.178147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 
00:26:55.589 [2024-05-15 11:12:52.187992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.188035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.188045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.188050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.188054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.188063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.198044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.198086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.198097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.198101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.198105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.198115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.208079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.208129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.208139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.208144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.208148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.208158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 
00:26:55.589 [2024-05-15 11:12:52.218101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.218153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.218164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.218168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.218172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.218182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.228144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.228194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.228204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.228208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.228213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.589 [2024-05-15 11:12:52.228223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.589 qpair failed and we were unable to recover it. 00:26:55.589 [2024-05-15 11:12:52.238155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.589 [2024-05-15 11:12:52.238204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.589 [2024-05-15 11:12:52.238214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.589 [2024-05-15 11:12:52.238219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.589 [2024-05-15 11:12:52.238223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.590 [2024-05-15 11:12:52.238233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.590 qpair failed and we were unable to recover it. 
00:26:55.851 [2024-05-15 11:12:52.248157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.851 [2024-05-15 11:12:52.248206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.851 [2024-05-15 11:12:52.248216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.851 [2024-05-15 11:12:52.248221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.851 [2024-05-15 11:12:52.248225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.851 [2024-05-15 11:12:52.248235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.851 qpair failed and we were unable to recover it. 00:26:55.851 [2024-05-15 11:12:52.258185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.851 [2024-05-15 11:12:52.258237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.851 [2024-05-15 11:12:52.258247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.851 [2024-05-15 11:12:52.258252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.851 [2024-05-15 11:12:52.258256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.851 [2024-05-15 11:12:52.258266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.851 qpair failed and we were unable to recover it. 00:26:55.851 [2024-05-15 11:12:52.268239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.851 [2024-05-15 11:12:52.268282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.851 [2024-05-15 11:12:52.268293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.851 [2024-05-15 11:12:52.268300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.851 [2024-05-15 11:12:52.268305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.851 [2024-05-15 11:12:52.268315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.851 qpair failed and we were unable to recover it. 
00:26:55.851 [2024-05-15 11:12:52.278256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.851 [2024-05-15 11:12:52.278308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.851 [2024-05-15 11:12:52.278326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.851 [2024-05-15 11:12:52.278332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.851 [2024-05-15 11:12:52.278337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.851 [2024-05-15 11:12:52.278350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.851 qpair failed and we were unable to recover it. 00:26:55.851 [2024-05-15 11:12:52.288297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.851 [2024-05-15 11:12:52.288347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.288365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.288371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.288376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.288389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.298335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.298386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.298404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.298410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.298415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.298427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 
00:26:55.852 [2024-05-15 11:12:52.308374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.308424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.308436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.308440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.308445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.308455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.318400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.318444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.318456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.318461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.318465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.318475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.328416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.328462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.328473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.328478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.328482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.328492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 
00:26:55.852 [2024-05-15 11:12:52.338416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.338498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.338509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.338513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.338518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.338527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.348459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.348509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.348519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.348524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.348528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.348538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.358411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.358454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.358466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.358474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.358478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.358488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 
00:26:55.852 [2024-05-15 11:12:52.368534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.368582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.368593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.368598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.368602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.368613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.378549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.378598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.378608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.378613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.378617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.378627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.388590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.388637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.388647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.388652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.388656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.388666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 
00:26:55.852 [2024-05-15 11:12:52.398576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.398614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.398625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.398630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.398634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.398644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.408693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.408742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.408752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.408757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.408761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.408771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 00:26:55.852 [2024-05-15 11:12:52.418678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.418728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.852 [2024-05-15 11:12:52.418738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.852 [2024-05-15 11:12:52.418743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.852 [2024-05-15 11:12:52.418747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.852 [2024-05-15 11:12:52.418756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.852 qpair failed and we were unable to recover it. 
00:26:55.852 [2024-05-15 11:12:52.428708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.852 [2024-05-15 11:12:52.428765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.853 [2024-05-15 11:12:52.428775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.853 [2024-05-15 11:12:52.428780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.853 [2024-05-15 11:12:52.428784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.853 [2024-05-15 11:12:52.428794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.853 qpair failed and we were unable to recover it. 00:26:55.853 [2024-05-15 11:12:52.438783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.853 [2024-05-15 11:12:52.438827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.853 [2024-05-15 11:12:52.438837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.853 [2024-05-15 11:12:52.438842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.853 [2024-05-15 11:12:52.438846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.853 [2024-05-15 11:12:52.438855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.853 qpair failed and we were unable to recover it. 00:26:55.853 [2024-05-15 11:12:52.448779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.853 [2024-05-15 11:12:52.448831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.853 [2024-05-15 11:12:52.448843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.853 [2024-05-15 11:12:52.448848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.853 [2024-05-15 11:12:52.448853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.853 [2024-05-15 11:12:52.448862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.853 qpair failed and we were unable to recover it. 
00:26:55.853 [2024-05-15 11:12:52.458782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.853 [2024-05-15 11:12:52.458834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.853 [2024-05-15 11:12:52.458844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.853 [2024-05-15 11:12:52.458849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.853 [2024-05-15 11:12:52.458853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.853 [2024-05-15 11:12:52.458863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.853 qpair failed and we were unable to recover it. 00:26:55.853 [2024-05-15 11:12:52.468833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.853 [2024-05-15 11:12:52.468880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.853 [2024-05-15 11:12:52.468890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.853 [2024-05-15 11:12:52.468895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.853 [2024-05-15 11:12:52.468899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.853 [2024-05-15 11:12:52.468909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.853 qpair failed and we were unable to recover it. 00:26:55.853 [2024-05-15 11:12:52.478820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.853 [2024-05-15 11:12:52.478861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.853 [2024-05-15 11:12:52.478871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.853 [2024-05-15 11:12:52.478876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.853 [2024-05-15 11:12:52.478880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.853 [2024-05-15 11:12:52.478890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.853 qpair failed and we were unable to recover it. 
00:26:55.853 [2024-05-15 11:12:52.488875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.853 [2024-05-15 11:12:52.488919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.853 [2024-05-15 11:12:52.488930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.853 [2024-05-15 11:12:52.488934] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.853 [2024-05-15 11:12:52.488939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.853 [2024-05-15 11:12:52.488951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.853 qpair failed and we were unable to recover it. 00:26:55.853 [2024-05-15 11:12:52.498901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.853 [2024-05-15 11:12:52.498966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.853 [2024-05-15 11:12:52.498977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.853 [2024-05-15 11:12:52.498981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.853 [2024-05-15 11:12:52.498985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:55.853 [2024-05-15 11:12:52.498995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.853 qpair failed and we were unable to recover it. 00:26:56.114 [2024-05-15 11:12:52.508905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.114 [2024-05-15 11:12:52.508947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.114 [2024-05-15 11:12:52.508958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.114 [2024-05-15 11:12:52.508962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.114 [2024-05-15 11:12:52.508967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.114 [2024-05-15 11:12:52.508977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.114 qpair failed and we were unable to recover it. 
00:26:56.114 [2024-05-15 11:12:52.518933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.114 [2024-05-15 11:12:52.518981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.114 [2024-05-15 11:12:52.518991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.114 [2024-05-15 11:12:52.518996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.114 [2024-05-15 11:12:52.519000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.114 [2024-05-15 11:12:52.519010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.114 qpair failed and we were unable to recover it. 00:26:56.114 [2024-05-15 11:12:52.528988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.114 [2024-05-15 11:12:52.529034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.114 [2024-05-15 11:12:52.529044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.114 [2024-05-15 11:12:52.529049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.114 [2024-05-15 11:12:52.529053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.114 [2024-05-15 11:12:52.529063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.114 qpair failed and we were unable to recover it. 00:26:56.114 [2024-05-15 11:12:52.539016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.114 [2024-05-15 11:12:52.539067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.114 [2024-05-15 11:12:52.539080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.114 [2024-05-15 11:12:52.539084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.114 [2024-05-15 11:12:52.539088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.114 [2024-05-15 11:12:52.539098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.114 qpair failed and we were unable to recover it. 
00:26:56.114 [2024-05-15 11:12:52.549037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.114 [2024-05-15 11:12:52.549086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.114 [2024-05-15 11:12:52.549096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.114 [2024-05-15 11:12:52.549101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.114 [2024-05-15 11:12:52.549105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.114 [2024-05-15 11:12:52.549115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.114 qpair failed and we were unable to recover it. 00:26:56.114 [2024-05-15 11:12:52.558999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.114 [2024-05-15 11:12:52.559043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.559053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.559058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.559062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.559071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.569087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.569134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.569144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.569149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.569153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.569163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 
00:26:56.115 [2024-05-15 11:12:52.579080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.579138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.579149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.579153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.579160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.579170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.589137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.589184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.589194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.589198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.589203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.589212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.599161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.599206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.599217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.599221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.599226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.599235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 
00:26:56.115 [2024-05-15 11:12:52.609193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.609242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.609253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.609258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.609262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.609272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.619227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.619283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.619301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.619307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.619312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.619325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.629250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.629298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.629316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.629322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.629327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.629340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 
00:26:56.115 [2024-05-15 11:12:52.639277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.639326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.639345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.639350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.639355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.639368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.649364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.649426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.649438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.649442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.649447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.649457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.659345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.659401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.659419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.659424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.659429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.659442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 
00:26:56.115 [2024-05-15 11:12:52.669364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.669408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.669419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.669424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.669432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.669443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.679397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.679441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.679453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.679457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.679462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.679472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 00:26:56.115 [2024-05-15 11:12:52.689351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.115 [2024-05-15 11:12:52.689402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.115 [2024-05-15 11:12:52.689413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.115 [2024-05-15 11:12:52.689417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.115 [2024-05-15 11:12:52.689421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.115 [2024-05-15 11:12:52.689432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.115 qpair failed and we were unable to recover it. 
00:26:56.115 [2024-05-15 11:12:52.699438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.116 [2024-05-15 11:12:52.699488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.116 [2024-05-15 11:12:52.699499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.116 [2024-05-15 11:12:52.699504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.116 [2024-05-15 11:12:52.699508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.116 [2024-05-15 11:12:52.699518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.116 qpair failed and we were unable to recover it. 00:26:56.116 [2024-05-15 11:12:52.709473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.116 [2024-05-15 11:12:52.709521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.116 [2024-05-15 11:12:52.709532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.116 [2024-05-15 11:12:52.709536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.116 [2024-05-15 11:12:52.709540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.116 [2024-05-15 11:12:52.709553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.116 qpair failed and we were unable to recover it. 00:26:56.116 [2024-05-15 11:12:52.719482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.116 [2024-05-15 11:12:52.719541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.116 [2024-05-15 11:12:52.719556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.116 [2024-05-15 11:12:52.719561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.116 [2024-05-15 11:12:52.719565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.116 [2024-05-15 11:12:52.719575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.116 qpair failed and we were unable to recover it. 
00:26:56.116 [2024-05-15 11:12:52.729526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.116 [2024-05-15 11:12:52.729576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.116 [2024-05-15 11:12:52.729587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.116 [2024-05-15 11:12:52.729591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.116 [2024-05-15 11:12:52.729595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.116 [2024-05-15 11:12:52.729605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.116 qpair failed and we were unable to recover it. 00:26:56.116 [2024-05-15 11:12:52.739725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.116 [2024-05-15 11:12:52.739794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.116 [2024-05-15 11:12:52.739805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.116 [2024-05-15 11:12:52.739810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.116 [2024-05-15 11:12:52.739814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.116 [2024-05-15 11:12:52.739823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.116 qpair failed and we were unable to recover it. 00:26:56.116 [2024-05-15 11:12:52.749581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.116 [2024-05-15 11:12:52.749628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.116 [2024-05-15 11:12:52.749639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.116 [2024-05-15 11:12:52.749643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.116 [2024-05-15 11:12:52.749647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.116 [2024-05-15 11:12:52.749657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.116 qpair failed and we were unable to recover it. 
00:26:56.116 [2024-05-15 11:12:52.759620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.116 [2024-05-15 11:12:52.759667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.116 [2024-05-15 11:12:52.759677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.116 [2024-05-15 11:12:52.759684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.116 [2024-05-15 11:12:52.759688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.116 [2024-05-15 11:12:52.759698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.116 qpair failed and we were unable to recover it. 00:26:56.378 [2024-05-15 11:12:52.769649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.378 [2024-05-15 11:12:52.769694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.378 [2024-05-15 11:12:52.769705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.378 [2024-05-15 11:12:52.769710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.378 [2024-05-15 11:12:52.769714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.378 [2024-05-15 11:12:52.769724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.378 qpair failed and we were unable to recover it. 00:26:56.378 [2024-05-15 11:12:52.779685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.378 [2024-05-15 11:12:52.779739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.378 [2024-05-15 11:12:52.779750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.378 [2024-05-15 11:12:52.779754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.378 [2024-05-15 11:12:52.779759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.378 [2024-05-15 11:12:52.779768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.378 qpair failed and we were unable to recover it. 
00:26:56.378 [2024-05-15 11:12:52.789709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.378 [2024-05-15 11:12:52.789776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.378 [2024-05-15 11:12:52.789786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.378 [2024-05-15 11:12:52.789791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.378 [2024-05-15 11:12:52.789795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.378 [2024-05-15 11:12:52.789805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.378 qpair failed and we were unable to recover it. 00:26:56.378 [2024-05-15 11:12:52.799735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.378 [2024-05-15 11:12:52.799782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.378 [2024-05-15 11:12:52.799793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.378 [2024-05-15 11:12:52.799797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.378 [2024-05-15 11:12:52.799802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.378 [2024-05-15 11:12:52.799812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.809741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.809790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.809801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.809806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.809810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.809820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 
00:26:56.379 [2024-05-15 11:12:52.819794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.819838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.819849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.819853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.819857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.819867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.829805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.829851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.829862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.829866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.829870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.829880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.839843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.839885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.839897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.839901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.839906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.839915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 
00:26:56.379 [2024-05-15 11:12:52.849878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.849923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.849936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.849941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.849945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.849955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.859903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.859952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.859963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.859967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.859972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.859981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.869914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.869960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.869970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.869975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.869979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.869989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 
00:26:56.379 [2024-05-15 11:12:52.879951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.879996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.880006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.880011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.880015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.880025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.889924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.889970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.889981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.889985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.889989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.890002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.900022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.900072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.900082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.900087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.900091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.900100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 
00:26:56.379 [2024-05-15 11:12:52.910042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.910082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.910093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.910097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.910102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.910111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.920079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.920122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.920132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.920137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.379 [2024-05-15 11:12:52.920141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.379 [2024-05-15 11:12:52.920151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.379 qpair failed and we were unable to recover it. 00:26:56.379 [2024-05-15 11:12:52.930108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.379 [2024-05-15 11:12:52.930155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.379 [2024-05-15 11:12:52.930165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.379 [2024-05-15 11:12:52.930170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:52.930175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:52.930186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 
00:26:56.380 [2024-05-15 11:12:52.940119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:52.940169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:52.940183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:52.940187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:52.940191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:52.940201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 00:26:56.380 [2024-05-15 11:12:52.950143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:52.950216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:52.950226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:52.950230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:52.950234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:52.950244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 00:26:56.380 [2024-05-15 11:12:52.960174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:52.960222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:52.960233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:52.960237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:52.960242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:52.960251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 
00:26:56.380 [2024-05-15 11:12:52.970213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:52.970259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:52.970270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:52.970275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:52.970279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:52.970289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 00:26:56.380 [2024-05-15 11:12:52.980238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:52.980283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:52.980293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:52.980298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:52.980302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:52.980314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 00:26:56.380 [2024-05-15 11:12:52.990264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:52.990307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:52.990317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:52.990321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:52.990325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:52.990335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 
00:26:56.380 [2024-05-15 11:12:53.000289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:53.000341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:53.000359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:53.000364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:53.000369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:53.000382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 00:26:56.380 [2024-05-15 11:12:53.010307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:53.010359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:53.010377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:53.010383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:53.010387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:53.010401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 00:26:56.380 [2024-05-15 11:12:53.020344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.380 [2024-05-15 11:12:53.020397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.380 [2024-05-15 11:12:53.020416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.380 [2024-05-15 11:12:53.020422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.380 [2024-05-15 11:12:53.020428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.380 [2024-05-15 11:12:53.020441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.380 qpair failed and we were unable to recover it. 
00:26:56.649 [2024-05-15 11:12:53.030377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.649 [2024-05-15 11:12:53.030427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.649 [2024-05-15 11:12:53.030438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.649 [2024-05-15 11:12:53.030443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.649 [2024-05-15 11:12:53.030447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.649 [2024-05-15 11:12:53.030459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.649 qpair failed and we were unable to recover it. 00:26:56.649 [2024-05-15 11:12:53.040385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.649 [2024-05-15 11:12:53.040430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.649 [2024-05-15 11:12:53.040441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.649 [2024-05-15 11:12:53.040446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.649 [2024-05-15 11:12:53.040450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.649 [2024-05-15 11:12:53.040460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.649 qpair failed and we were unable to recover it. 00:26:56.649 [2024-05-15 11:12:53.050434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.649 [2024-05-15 11:12:53.050481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.649 [2024-05-15 11:12:53.050491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.649 [2024-05-15 11:12:53.050496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.649 [2024-05-15 11:12:53.050500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.649 [2024-05-15 11:12:53.050510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.649 qpair failed and we were unable to recover it. 
00:26:56.649 [2024-05-15 11:12:53.060348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.649 [2024-05-15 11:12:53.060442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.649 [2024-05-15 11:12:53.060453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.649 [2024-05-15 11:12:53.060458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.649 [2024-05-15 11:12:53.060462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.649 [2024-05-15 11:12:53.060472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.649 qpair failed and we were unable to recover it. 00:26:56.649 [2024-05-15 11:12:53.070486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.649 [2024-05-15 11:12:53.070559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.649 [2024-05-15 11:12:53.070570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.649 [2024-05-15 11:12:53.070575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.649 [2024-05-15 11:12:53.070604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.649 [2024-05-15 11:12:53.070615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.649 qpair failed and we were unable to recover it. 00:26:56.649 [2024-05-15 11:12:53.080509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.649 [2024-05-15 11:12:53.080557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.649 [2024-05-15 11:12:53.080568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.649 [2024-05-15 11:12:53.080572] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.649 [2024-05-15 11:12:53.080577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.649 [2024-05-15 11:12:53.080587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.649 qpair failed and we were unable to recover it. 
00:26:56.649 [2024-05-15 11:12:53.090574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.649 [2024-05-15 11:12:53.090624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.649 [2024-05-15 11:12:53.090634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.649 [2024-05-15 11:12:53.090639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.649 [2024-05-15 11:12:53.090644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.649 [2024-05-15 11:12:53.090654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.649 qpair failed and we were unable to recover it. 00:26:56.649 [2024-05-15 11:12:53.100582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.649 [2024-05-15 11:12:53.100634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.650 [2024-05-15 11:12:53.100645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.650 [2024-05-15 11:12:53.100650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.650 [2024-05-15 11:12:53.100654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.650 [2024-05-15 11:12:53.100665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.650 qpair failed and we were unable to recover it. 00:26:56.650 [2024-05-15 11:12:53.110589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.650 [2024-05-15 11:12:53.110634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.650 [2024-05-15 11:12:53.110646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.650 [2024-05-15 11:12:53.110650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.650 [2024-05-15 11:12:53.110655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.650 [2024-05-15 11:12:53.110665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.650 qpair failed and we were unable to recover it. 
00:26:56.650 [2024-05-15 11:12:53.120621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.650 [2024-05-15 11:12:53.120665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.650 [2024-05-15 11:12:53.120675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.650 [2024-05-15 11:12:53.120680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.650 [2024-05-15 11:12:53.120684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.650 [2024-05-15 11:12:53.120694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.650 qpair failed and we were unable to recover it. 00:26:56.650 [2024-05-15 11:12:53.130666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.650 [2024-05-15 11:12:53.130709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.650 [2024-05-15 11:12:53.130719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.650 [2024-05-15 11:12:53.130724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.650 [2024-05-15 11:12:53.130728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.651 [2024-05-15 11:12:53.130738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.651 qpair failed and we were unable to recover it. 00:26:56.651 [2024-05-15 11:12:53.140679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.651 [2024-05-15 11:12:53.140736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.651 [2024-05-15 11:12:53.140747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.651 [2024-05-15 11:12:53.140752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.651 [2024-05-15 11:12:53.140756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.651 [2024-05-15 11:12:53.140766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.651 qpair failed and we were unable to recover it. 
00:26:56.651 [2024-05-15 11:12:53.150736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.651 [2024-05-15 11:12:53.150806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.651 [2024-05-15 11:12:53.150816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.651 [2024-05-15 11:12:53.150821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.651 [2024-05-15 11:12:53.150825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.651 [2024-05-15 11:12:53.150835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.651 qpair failed and we were unable to recover it. 00:26:56.651 [2024-05-15 11:12:53.160745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.651 [2024-05-15 11:12:53.160787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.651 [2024-05-15 11:12:53.160797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.651 [2024-05-15 11:12:53.160804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.651 [2024-05-15 11:12:53.160808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.651 [2024-05-15 11:12:53.160818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.651 qpair failed and we were unable to recover it. 00:26:56.651 [2024-05-15 11:12:53.170786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.651 [2024-05-15 11:12:53.170878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.651 [2024-05-15 11:12:53.170889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.651 [2024-05-15 11:12:53.170893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.651 [2024-05-15 11:12:53.170897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.651 [2024-05-15 11:12:53.170907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.651 qpair failed and we were unable to recover it. 
00:26:56.651 [2024-05-15 11:12:53.180838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.180904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.180915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.180919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.180923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.180933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 00:26:56.652 [2024-05-15 11:12:53.190832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.190906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.190917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.190922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.190926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.190935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 00:26:56.652 [2024-05-15 11:12:53.200873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.200917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.200928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.200933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.200937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.200946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 
00:26:56.652 [2024-05-15 11:12:53.210824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.210868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.210879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.210883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.210888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.210897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 00:26:56.652 [2024-05-15 11:12:53.220943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.220992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.221002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.221007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.221011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.221021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 00:26:56.652 [2024-05-15 11:12:53.230954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.230998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.231008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.231013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.231017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.231027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 
00:26:56.652 [2024-05-15 11:12:53.240992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.241038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.241049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.241054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.241058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.241068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 00:26:56.652 [2024-05-15 11:12:53.251019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.251062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.251076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.251080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.251085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.251095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 00:26:56.652 [2024-05-15 11:12:53.261037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.261089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.261100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.261104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.261109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.261118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 
00:26:56.652 [2024-05-15 11:12:53.271080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.271123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.271133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.652 [2024-05-15 11:12:53.271138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.652 [2024-05-15 11:12:53.271142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.652 [2024-05-15 11:12:53.271151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.652 qpair failed and we were unable to recover it. 00:26:56.652 [2024-05-15 11:12:53.281100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.652 [2024-05-15 11:12:53.281145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.652 [2024-05-15 11:12:53.281156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.653 [2024-05-15 11:12:53.281160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.653 [2024-05-15 11:12:53.281164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.653 [2024-05-15 11:12:53.281174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.653 qpair failed and we were unable to recover it. 00:26:56.653 [2024-05-15 11:12:53.291143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.653 [2024-05-15 11:12:53.291190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.653 [2024-05-15 11:12:53.291199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.653 [2024-05-15 11:12:53.291204] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.653 [2024-05-15 11:12:53.291208] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.653 [2024-05-15 11:12:53.291221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.653 qpair failed and we were unable to recover it. 
00:26:56.917 [2024-05-15 11:12:53.301157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.917 [2024-05-15 11:12:53.301206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.917 [2024-05-15 11:12:53.301217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.917 [2024-05-15 11:12:53.301221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.917 [2024-05-15 11:12:53.301225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.917 [2024-05-15 11:12:53.301235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.917 qpair failed and we were unable to recover it. 00:26:56.917 [2024-05-15 11:12:53.311184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.311258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.311268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.311273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.311277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.311287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-05-15 11:12:53.321226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.321271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.321282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.321286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.321291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.321300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 
00:26:56.918 [2024-05-15 11:12:53.331263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.331310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.331320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.331325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.331329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.331339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-05-15 11:12:53.341321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.341373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.341387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.341391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.341395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.341405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-05-15 11:12:53.351306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.351350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.351361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.351366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.351370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.351380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 
00:26:56.918 [2024-05-15 11:12:53.361317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.361359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.361370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.361374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.361379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.361388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-05-15 11:12:53.371319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.371365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.371375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.371380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.371384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.371395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-05-15 11:12:53.381354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.381409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.381420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.381425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.381429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.381441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 
00:26:56.918 [2024-05-15 11:12:53.391282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.391329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.391339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.391344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.391348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.391358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-05-15 11:12:53.401311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.401392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.401403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.401408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.401412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.401423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-05-15 11:12:53.411486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.411531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.411542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.411551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.411555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.411566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 
00:26:56.918 [2024-05-15 11:12:53.421511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.421563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.421574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.421578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.918 [2024-05-15 11:12:53.421582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.918 [2024-05-15 11:12:53.421592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.918 qpair failed and we were unable to recover it. 00:26:56.918 [2024-05-15 11:12:53.431521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.918 [2024-05-15 11:12:53.431570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.918 [2024-05-15 11:12:53.431583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.918 [2024-05-15 11:12:53.431588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.431592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.431603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-05-15 11:12:53.441559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.441634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.441645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.441650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.441654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.441664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 
00:26:56.919 [2024-05-15 11:12:53.451588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.451636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.451646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.451651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.451655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.451665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-05-15 11:12:53.461660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.461718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.461729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.461734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.461738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.461748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-05-15 11:12:53.471576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.471629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.471640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.471644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.471651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.471661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 
00:26:56.919 [2024-05-15 11:12:53.481686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.481751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.481762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.481767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.481771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.481781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-05-15 11:12:53.491595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.491691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.491701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.491706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.491710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.491720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-05-15 11:12:53.501734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.501783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.501793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.501798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.501802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.501812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 
00:26:56.919 [2024-05-15 11:12:53.511756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.511803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.511814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.511818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.511822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.511832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-05-15 11:12:53.521768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.521821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.521832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.521837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.521841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.521851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-05-15 11:12:53.531718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.531763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.531774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.531779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.531783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.531793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 
00:26:56.919 [2024-05-15 11:12:53.541826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.541902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.541913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.541917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.919 [2024-05-15 11:12:53.541921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.919 [2024-05-15 11:12:53.541931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.919 qpair failed and we were unable to recover it. 00:26:56.919 [2024-05-15 11:12:53.551748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.919 [2024-05-15 11:12:53.551807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.919 [2024-05-15 11:12:53.551817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.919 [2024-05-15 11:12:53.551822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-05-15 11:12:53.551826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.920 [2024-05-15 11:12:53.551835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 00:26:56.920 [2024-05-15 11:12:53.561882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.920 [2024-05-15 11:12:53.561932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.920 [2024-05-15 11:12:53.561943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.920 [2024-05-15 11:12:53.561951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.920 [2024-05-15 11:12:53.561955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:56.920 [2024-05-15 11:12:53.561965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 qpair failed and we were unable to recover it. 
00:26:57.182 [2024-05-15 11:12:53.571944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.182 [2024-05-15 11:12:53.571991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.182 [2024-05-15 11:12:53.572002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.182 [2024-05-15 11:12:53.572007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.182 [2024-05-15 11:12:53.572011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.182 [2024-05-15 11:12:53.572021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.182 qpair failed and we were unable to recover it. 00:26:57.182 [2024-05-15 11:12:53.581945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.182 [2024-05-15 11:12:53.582034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.182 [2024-05-15 11:12:53.582045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.182 [2024-05-15 11:12:53.582049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.182 [2024-05-15 11:12:53.582053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.182 [2024-05-15 11:12:53.582063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.182 qpair failed and we were unable to recover it. 00:26:57.182 [2024-05-15 11:12:53.591867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.182 [2024-05-15 11:12:53.591914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.182 [2024-05-15 11:12:53.591925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.182 [2024-05-15 11:12:53.591929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.182 [2024-05-15 11:12:53.591934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.182 [2024-05-15 11:12:53.591943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.182 qpair failed and we were unable to recover it. 
00:26:57.182 [2024-05-15 11:12:53.601965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.182 [2024-05-15 11:12:53.602009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.182 [2024-05-15 11:12:53.602019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.182 [2024-05-15 11:12:53.602024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.182 [2024-05-15 11:12:53.602028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.182 [2024-05-15 11:12:53.602038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.182 qpair failed and we were unable to recover it. 00:26:57.182 [2024-05-15 11:12:53.612029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.182 [2024-05-15 11:12:53.612102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.612113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.612117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.612121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.612131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-05-15 11:12:53.622055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.622107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.622117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.622121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.622126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.622135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 
00:26:57.183 [2024-05-15 11:12:53.632084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.632128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.632138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.632143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.632147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.632156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-05-15 11:12:53.642102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.642156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.642167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.642171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.642175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.642185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-05-15 11:12:53.652143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.652219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.652229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.652237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.652241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.652251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 
00:26:57.183 [2024-05-15 11:12:53.662195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.662277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.662287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.662292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.662296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.662306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-05-15 11:12:53.672187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.672239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.672257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.672263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.672267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.672281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-05-15 11:12:53.682287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.682332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.682343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.682348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.682353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.682363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 
00:26:57.183 [2024-05-15 11:12:53.692298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.692365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.692384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.692389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.692394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.692407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-05-15 11:12:53.702293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.702347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.702359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.702364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.702368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.702378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-05-15 11:12:53.712348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.712426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.712437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.712441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.712445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.712456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 
00:26:57.183 [2024-05-15 11:12:53.722351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.722418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.722429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.722434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.722438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.183 [2024-05-15 11:12:53.722448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.183 qpair failed and we were unable to recover it. 00:26:57.183 [2024-05-15 11:12:53.732374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.183 [2024-05-15 11:12:53.732424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.183 [2024-05-15 11:12:53.732435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.183 [2024-05-15 11:12:53.732440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.183 [2024-05-15 11:12:53.732444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.732454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-05-15 11:12:53.742404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.742458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.742472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.742477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.742481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.742490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 
00:26:57.184 [2024-05-15 11:12:53.752427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.752470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.752480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.752485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.752489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.752499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-05-15 11:12:53.762450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.762498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.762508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.762513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.762517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.762527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-05-15 11:12:53.772442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.772513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.772524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.772528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.772532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.772542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 
00:26:57.184 [2024-05-15 11:12:53.782378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.782434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.782444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.782449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.782453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.782467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-05-15 11:12:53.792528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.792612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.792622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.792627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.792631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.792641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-05-15 11:12:53.802561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.802605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.802615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.802620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.802624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.802633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 
00:26:57.184 [2024-05-15 11:12:53.812583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.812631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.812641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.812646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.812650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.812659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-05-15 11:12:53.822615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.822673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.822683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.822688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.822692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.822701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 00:26:57.184 [2024-05-15 11:12:53.832652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.184 [2024-05-15 11:12:53.832700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.184 [2024-05-15 11:12:53.832712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.184 [2024-05-15 11:12:53.832717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.184 [2024-05-15 11:12:53.832721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.184 [2024-05-15 11:12:53.832731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.184 qpair failed and we were unable to recover it. 
00:26:57.448 [2024-05-15 11:12:53.842686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-05-15 11:12:53.842730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-05-15 11:12:53.842740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-05-15 11:12:53.842745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-05-15 11:12:53.842749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.448 [2024-05-15 11:12:53.842758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-05-15 11:12:53.852675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-05-15 11:12:53.852737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-05-15 11:12:53.852748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-05-15 11:12:53.852752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-05-15 11:12:53.852756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.448 [2024-05-15 11:12:53.852766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-05-15 11:12:53.862729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-05-15 11:12:53.862776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-05-15 11:12:53.862787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-05-15 11:12:53.862791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-05-15 11:12:53.862795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.448 [2024-05-15 11:12:53.862805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 
00:26:57.448 [2024-05-15 11:12:53.872723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-05-15 11:12:53.872765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-05-15 11:12:53.872775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-05-15 11:12:53.872779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-05-15 11:12:53.872786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.448 [2024-05-15 11:12:53.872796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-05-15 11:12:53.882774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-05-15 11:12:53.882819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-05-15 11:12:53.882829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-05-15 11:12:53.882834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-05-15 11:12:53.882838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.448 [2024-05-15 11:12:53.882847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-05-15 11:12:53.892777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-05-15 11:12:53.892826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-05-15 11:12:53.892836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-05-15 11:12:53.892841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-05-15 11:12:53.892845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.448 [2024-05-15 11:12:53.892854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 
00:26:57.448 [2024-05-15 11:12:53.902865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-05-15 11:12:53.902920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-05-15 11:12:53.902930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-05-15 11:12:53.902935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-05-15 11:12:53.902939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.448 [2024-05-15 11:12:53.902949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-05-15 11:12:53.912840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.448 [2024-05-15 11:12:53.912884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.448 [2024-05-15 11:12:53.912895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.448 [2024-05-15 11:12:53.912900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.448 [2024-05-15 11:12:53.912904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.448 [2024-05-15 11:12:53.912913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.448 qpair failed and we were unable to recover it. 00:26:57.448 [2024-05-15 11:12:53.922766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:53.922814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:53.922825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:53.922829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:53.922833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:53.922843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 
00:26:57.449 [2024-05-15 11:12:53.932969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:53.933016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:53.933026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:53.933031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:53.933035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:53.933045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 00:26:57.449 [2024-05-15 11:12:53.942946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:53.942995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:53.943005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:53.943010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:53.943014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:53.943024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 00:26:57.449 [2024-05-15 11:12:53.952950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:53.952993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:53.953003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:53.953007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:53.953012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:53.953021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 
00:26:57.449 [2024-05-15 11:12:53.963010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:53.963059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:53.963070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:53.963077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:53.963082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:53.963091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 00:26:57.449 [2024-05-15 11:12:53.973137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:53.973198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:53.973208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:53.973212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:53.973216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:53.973226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 00:26:57.449 [2024-05-15 11:12:53.982992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:53.983045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:53.983056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:53.983061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:53.983065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:53.983075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 
00:26:57.449 [2024-05-15 11:12:53.993141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:53.993185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:53.993195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:53.993200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:53.993204] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:53.993214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 00:26:57.449 [2024-05-15 11:12:54.003168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:54.003213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:54.003223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:54.003228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:54.003232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:54.003242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 00:26:57.449 [2024-05-15 11:12:54.013148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:54.013197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:54.013207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:54.013212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:54.013216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:54.013226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 
00:26:57.449 [2024-05-15 11:12:54.023184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:54.023233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:54.023246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:54.023250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:54.023255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:54.023266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 00:26:57.449 [2024-05-15 11:12:54.033205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:54.033247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:54.033258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.449 [2024-05-15 11:12:54.033262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.449 [2024-05-15 11:12:54.033266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.449 [2024-05-15 11:12:54.033276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.449 qpair failed and we were unable to recover it. 00:26:57.449 [2024-05-15 11:12:54.043235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.449 [2024-05-15 11:12:54.043283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.449 [2024-05-15 11:12:54.043294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.450 [2024-05-15 11:12:54.043299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.450 [2024-05-15 11:12:54.043303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.450 [2024-05-15 11:12:54.043313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.450 qpair failed and we were unable to recover it. 
00:26:57.450 [2024-05-15 11:12:54.053254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.450 [2024-05-15 11:12:54.053298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.450 [2024-05-15 11:12:54.053309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.450 [2024-05-15 11:12:54.053317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.450 [2024-05-15 11:12:54.053321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.450 [2024-05-15 11:12:54.053331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.450 qpair failed and we were unable to recover it. 00:26:57.450 [2024-05-15 11:12:54.063208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.450 [2024-05-15 11:12:54.063259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.450 [2024-05-15 11:12:54.063269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.450 [2024-05-15 11:12:54.063274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.450 [2024-05-15 11:12:54.063278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.450 [2024-05-15 11:12:54.063288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.450 qpair failed and we were unable to recover it. 00:26:57.450 [2024-05-15 11:12:54.073311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.450 [2024-05-15 11:12:54.073367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.450 [2024-05-15 11:12:54.073385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.450 [2024-05-15 11:12:54.073390] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.450 [2024-05-15 11:12:54.073395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.450 [2024-05-15 11:12:54.073408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.450 qpair failed and we were unable to recover it. 
00:26:57.450 [2024-05-15 11:12:54.083338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.450 [2024-05-15 11:12:54.083386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.450 [2024-05-15 11:12:54.083405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.450 [2024-05-15 11:12:54.083410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.450 [2024-05-15 11:12:54.083415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.450 [2024-05-15 11:12:54.083428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.450 qpair failed and we were unable to recover it. 00:26:57.450 [2024-05-15 11:12:54.093378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.450 [2024-05-15 11:12:54.093427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.450 [2024-05-15 11:12:54.093439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.450 [2024-05-15 11:12:54.093444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.450 [2024-05-15 11:12:54.093448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.450 [2024-05-15 11:12:54.093459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.450 qpair failed and we were unable to recover it. 00:26:57.713 [2024-05-15 11:12:54.103395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.713 [2024-05-15 11:12:54.103448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.713 [2024-05-15 11:12:54.103459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.713 [2024-05-15 11:12:54.103464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.713 [2024-05-15 11:12:54.103468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.713 [2024-05-15 11:12:54.103478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.713 qpair failed and we were unable to recover it. 
00:26:57.713 [2024-05-15 11:12:54.113425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.713 [2024-05-15 11:12:54.113477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.113488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.113493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.113497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.113507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 00:26:57.714 [2024-05-15 11:12:54.123447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.123495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.123505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.123510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.123514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.123524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 00:26:57.714 [2024-05-15 11:12:54.133477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.133523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.133534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.133538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.133542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.133555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 
00:26:57.714 [2024-05-15 11:12:54.143520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.143573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.143587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.143592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.143596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.143606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 00:26:57.714 [2024-05-15 11:12:54.153511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.153566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.153577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.153581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.153585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.153595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 00:26:57.714 [2024-05-15 11:12:54.163537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.163582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.163592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.163597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.163601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.163611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 
00:26:57.714 [2024-05-15 11:12:54.173577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.173629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.173639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.173644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.173648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.173658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 00:26:57.714 [2024-05-15 11:12:54.183626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.183677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.183688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.183692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.183696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.183709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 00:26:57.714 [2024-05-15 11:12:54.193642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.193690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.193700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.193705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.193709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.193719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 
00:26:57.714 [2024-05-15 11:12:54.203663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.203733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.203744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.203749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.203753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.203764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 00:26:57.714 [2024-05-15 11:12:54.213704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.213753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.213764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.213769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.213773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.714 [2024-05-15 11:12:54.213783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.714 qpair failed and we were unable to recover it. 00:26:57.714 [2024-05-15 11:12:54.223740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.714 [2024-05-15 11:12:54.223792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.714 [2024-05-15 11:12:54.223803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.714 [2024-05-15 11:12:54.223807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.714 [2024-05-15 11:12:54.223811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.223821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 
00:26:57.715 [2024-05-15 11:12:54.233745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.233790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.233803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.233808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.233812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.233822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 00:26:57.715 [2024-05-15 11:12:54.243795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.243840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.243850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.243855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.243860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.243869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 00:26:57.715 [2024-05-15 11:12:54.253694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.253741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.253752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.253756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.253761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.253770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 
00:26:57.715 [2024-05-15 11:12:54.263878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.263943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.263954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.263958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.263962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.263972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 00:26:57.715 [2024-05-15 11:12:54.273877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.273925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.273935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.273940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.273947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.273957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 00:26:57.715 [2024-05-15 11:12:54.283895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.283949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.283959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.283964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.283968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.283978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 
00:26:57.715 [2024-05-15 11:12:54.293844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.293893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.293904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.293908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.293912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.293922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 00:26:57.715 [2024-05-15 11:12:54.303963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.304013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.304024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.304028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.304032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.304042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 00:26:57.715 [2024-05-15 11:12:54.313892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.313932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.313943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.313947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.313951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.313961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 
00:26:57.715 [2024-05-15 11:12:54.323889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.323937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.323947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.323952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.323956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.323966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 00:26:57.715 [2024-05-15 11:12:54.334067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.334116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.334126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.334131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.334135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.334144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 00:26:57.715 [2024-05-15 11:12:54.344078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.344127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.715 [2024-05-15 11:12:54.344138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.715 [2024-05-15 11:12:54.344142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.715 [2024-05-15 11:12:54.344146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.715 [2024-05-15 11:12:54.344156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.715 qpair failed and we were unable to recover it. 
00:26:57.715 [2024-05-15 11:12:54.354124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.715 [2024-05-15 11:12:54.354199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.716 [2024-05-15 11:12:54.354209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.716 [2024-05-15 11:12:54.354214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.716 [2024-05-15 11:12:54.354218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.716 [2024-05-15 11:12:54.354229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.716 qpair failed and we were unable to recover it. 00:26:57.716 [2024-05-15 11:12:54.364146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.716 [2024-05-15 11:12:54.364187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.716 [2024-05-15 11:12:54.364197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.716 [2024-05-15 11:12:54.364202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.716 [2024-05-15 11:12:54.364209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.716 [2024-05-15 11:12:54.364219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.716 qpair failed and we were unable to recover it. 00:26:57.979 [2024-05-15 11:12:54.374174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.374220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.374230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.374235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.374239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.374248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 
00:26:57.979 [2024-05-15 11:12:54.384191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.384240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.384251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.384256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.384260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.384269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 00:26:57.979 [2024-05-15 11:12:54.394221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.394266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.394276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.394281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.394285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.394295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 00:26:57.979 [2024-05-15 11:12:54.404255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.404296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.404307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.404311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.404315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.404325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 
00:26:57.979 [2024-05-15 11:12:54.414283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.414335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.414353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.414359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.414363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.414376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 00:26:57.979 [2024-05-15 11:12:54.424296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.424347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.424359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.424364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.424368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.424379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 00:26:57.979 [2024-05-15 11:12:54.434302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.434352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.434362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.434367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.434371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.434381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 
00:26:57.979 [2024-05-15 11:12:54.444362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.444453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.444463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.444468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.444472] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.444482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 00:26:57.979 [2024-05-15 11:12:54.454377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.454460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.454470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.454478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.979 [2024-05-15 11:12:54.454483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.979 [2024-05-15 11:12:54.454493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.979 qpair failed and we were unable to recover it. 00:26:57.979 [2024-05-15 11:12:54.464389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.979 [2024-05-15 11:12:54.464443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.979 [2024-05-15 11:12:54.464453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.979 [2024-05-15 11:12:54.464458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.464462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.464471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 
00:26:57.980 [2024-05-15 11:12:54.474437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.474484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.474494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.474499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.474503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.474513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 00:26:57.980 [2024-05-15 11:12:54.484430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.484484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.484495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.484499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.484504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.484513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 00:26:57.980 [2024-05-15 11:12:54.494482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.494528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.494538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.494543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.494550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.494561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 
00:26:57.980 [2024-05-15 11:12:54.504491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.504543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.504558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.504562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.504566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.504576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 00:26:57.980 [2024-05-15 11:12:54.514365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.514408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.514418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.514423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.514427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.514437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 00:26:57.980 [2024-05-15 11:12:54.524578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.524622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.524632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.524636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.524641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.524651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 
00:26:57.980 [2024-05-15 11:12:54.534596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.534645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.534655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.534660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.534664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.534674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 00:26:57.980 [2024-05-15 11:12:54.544610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.544658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.544673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.544677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.544681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.544691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 00:26:57.980 [2024-05-15 11:12:54.554613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.554656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.554666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.554671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.554675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.554684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.980 qpair failed and we were unable to recover it. 
00:26:57.980 [2024-05-15 11:12:54.564630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.980 [2024-05-15 11:12:54.564678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.980 [2024-05-15 11:12:54.564688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.980 [2024-05-15 11:12:54.564693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.980 [2024-05-15 11:12:54.564697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.980 [2024-05-15 11:12:54.564707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.981 qpair failed and we were unable to recover it. 00:26:57.981 [2024-05-15 11:12:54.574691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.981 [2024-05-15 11:12:54.574737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.981 [2024-05-15 11:12:54.574747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.981 [2024-05-15 11:12:54.574752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.981 [2024-05-15 11:12:54.574756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.981 [2024-05-15 11:12:54.574766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.981 qpair failed and we were unable to recover it. 00:26:57.981 [2024-05-15 11:12:54.584732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.981 [2024-05-15 11:12:54.584814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.981 [2024-05-15 11:12:54.584825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.981 [2024-05-15 11:12:54.584829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.981 [2024-05-15 11:12:54.584833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.981 [2024-05-15 11:12:54.584846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.981 qpair failed and we were unable to recover it. 
00:26:57.981 [2024-05-15 11:12:54.594686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.981 [2024-05-15 11:12:54.594727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.981 [2024-05-15 11:12:54.594738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.981 [2024-05-15 11:12:54.594742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.981 [2024-05-15 11:12:54.594746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.981 [2024-05-15 11:12:54.594756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.981 qpair failed and we were unable to recover it. 00:26:57.981 [2024-05-15 11:12:54.604679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.981 [2024-05-15 11:12:54.604744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.981 [2024-05-15 11:12:54.604754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.981 [2024-05-15 11:12:54.604759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.981 [2024-05-15 11:12:54.604763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.981 [2024-05-15 11:12:54.604772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.981 qpair failed and we were unable to recover it. 00:26:57.981 [2024-05-15 11:12:54.614683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.981 [2024-05-15 11:12:54.614728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.981 [2024-05-15 11:12:54.614739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.981 [2024-05-15 11:12:54.614744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.981 [2024-05-15 11:12:54.614748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.981 [2024-05-15 11:12:54.614757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.981 qpair failed and we were unable to recover it. 
00:26:57.981 [2024-05-15 11:12:54.624862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.981 [2024-05-15 11:12:54.624911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.981 [2024-05-15 11:12:54.624922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.981 [2024-05-15 11:12:54.624927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.981 [2024-05-15 11:12:54.624931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:57.981 [2024-05-15 11:12:54.624941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.981 qpair failed and we were unable to recover it. 00:26:58.244 [2024-05-15 11:12:54.634800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.634840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.634853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.634858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.634862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.634872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 00:26:58.244 [2024-05-15 11:12:54.644844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.644880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.644890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.644895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.644899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.644908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 
00:26:58.244 [2024-05-15 11:12:54.654909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.654953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.654963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.654968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.654972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.654981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 00:26:58.244 [2024-05-15 11:12:54.664930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.664977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.664988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.664992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.664997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.665006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 00:26:58.244 [2024-05-15 11:12:54.674932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.674971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.674981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.674986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.674990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.675002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 
00:26:58.244 [2024-05-15 11:12:54.684955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.684994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.685005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.685009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.685014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.685023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 00:26:58.244 [2024-05-15 11:12:54.695010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.695056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.695066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.695071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.695075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.695085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 00:26:58.244 [2024-05-15 11:12:54.705049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.705099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.705110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.705114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.705119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.705129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 
00:26:58.244 [2024-05-15 11:12:54.715026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.715064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.715075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.715079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.244 [2024-05-15 11:12:54.715083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.244 [2024-05-15 11:12:54.715093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.244 qpair failed and we were unable to recover it. 00:26:58.244 [2024-05-15 11:12:54.724954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.244 [2024-05-15 11:12:54.725007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.244 [2024-05-15 11:12:54.725017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.244 [2024-05-15 11:12:54.725022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.725026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.725036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 00:26:58.245 [2024-05-15 11:12:54.735139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.735185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.735196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.735201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.735205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.735215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 
00:26:58.245 [2024-05-15 11:12:54.745160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.745210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.745220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.745225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.745229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.745238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 00:26:58.245 [2024-05-15 11:12:54.755105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.755146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.755157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.755161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.755165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.755175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 00:26:58.245 [2024-05-15 11:12:54.765159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.765210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.765220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.765225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.765232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.765242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 
00:26:58.245 [2024-05-15 11:12:54.775252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.775301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.775312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.775316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.775320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.775330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 00:26:58.245 [2024-05-15 11:12:54.785151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.785203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.785214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.785218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.785222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.785232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 00:26:58.245 [2024-05-15 11:12:54.795229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.795268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.795279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.795284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.795288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.795298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 
00:26:58.245 [2024-05-15 11:12:54.805274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.805314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.805324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.805329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.805333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.805343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 00:26:58.245 [2024-05-15 11:12:54.815360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.815408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.815418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.815423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.815428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.815437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 00:26:58.245 [2024-05-15 11:12:54.825381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.825429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.825440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.825445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.825449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.825459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 
00:26:58.245 [2024-05-15 11:12:54.835349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.835389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.245 [2024-05-15 11:12:54.835400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.245 [2024-05-15 11:12:54.835404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.245 [2024-05-15 11:12:54.835408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.245 [2024-05-15 11:12:54.835418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.245 qpair failed and we were unable to recover it. 00:26:58.245 [2024-05-15 11:12:54.845409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.245 [2024-05-15 11:12:54.845447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.246 [2024-05-15 11:12:54.845457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.246 [2024-05-15 11:12:54.845461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.246 [2024-05-15 11:12:54.845466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.246 [2024-05-15 11:12:54.845476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.246 qpair failed and we were unable to recover it. 00:26:58.246 [2024-05-15 11:12:54.855444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.246 [2024-05-15 11:12:54.855528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.246 [2024-05-15 11:12:54.855539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.246 [2024-05-15 11:12:54.855553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.246 [2024-05-15 11:12:54.855557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.246 [2024-05-15 11:12:54.855567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.246 qpair failed and we were unable to recover it. 
00:26:58.246 [2024-05-15 11:12:54.865515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.246 [2024-05-15 11:12:54.865571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.246 [2024-05-15 11:12:54.865581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.246 [2024-05-15 11:12:54.865586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.246 [2024-05-15 11:12:54.865590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.246 [2024-05-15 11:12:54.865600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.246 qpair failed and we were unable to recover it. 00:26:58.246 [2024-05-15 11:12:54.875625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.246 [2024-05-15 11:12:54.875665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.246 [2024-05-15 11:12:54.875675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.246 [2024-05-15 11:12:54.875680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.246 [2024-05-15 11:12:54.875684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.246 [2024-05-15 11:12:54.875693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.246 qpair failed and we were unable to recover it. 00:26:58.246 [2024-05-15 11:12:54.885539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.246 [2024-05-15 11:12:54.885581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.246 [2024-05-15 11:12:54.885591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.246 [2024-05-15 11:12:54.885596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.246 [2024-05-15 11:12:54.885600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.246 [2024-05-15 11:12:54.885610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.246 qpair failed and we were unable to recover it. 
00:26:58.509 [2024-05-15 11:12:54.895521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.895572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.895583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.895588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.895592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.895603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 00:26:58.509 [2024-05-15 11:12:54.905564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.905613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.905624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.905628] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.905633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.905643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 00:26:58.509 [2024-05-15 11:12:54.915630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.915671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.915681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.915686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.915690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.915700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 
00:26:58.509 [2024-05-15 11:12:54.925627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.925697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.925708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.925712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.925717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.925727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 00:26:58.509 [2024-05-15 11:12:54.935715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.935760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.935770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.935775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.935779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.935789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 00:26:58.509 [2024-05-15 11:12:54.945708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.945766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.945780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.945785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.945789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.945799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 
00:26:58.509 [2024-05-15 11:12:54.955700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.955740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.955751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.955756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.955760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.955770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 00:26:58.509 [2024-05-15 11:12:54.965681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.965719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.965729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.965734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.965738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.965747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 00:26:58.509 [2024-05-15 11:12:54.975698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.975745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.975757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.975761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.975766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.975776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 
00:26:58.509 [2024-05-15 11:12:54.985868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.985913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.985926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.985931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.985935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.985948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 00:26:58.509 [2024-05-15 11:12:54.995821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.509 [2024-05-15 11:12:54.995861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.509 [2024-05-15 11:12:54.995872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.509 [2024-05-15 11:12:54.995877] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.509 [2024-05-15 11:12:54.995881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.509 [2024-05-15 11:12:54.995891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.509 qpair failed and we were unable to recover it. 00:26:58.510 [2024-05-15 11:12:55.005868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.005907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.005918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.005922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.005927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.005936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 
00:26:58.510 [2024-05-15 11:12:55.016020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.016069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.016080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.016085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.016089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.016099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 00:26:58.510 [2024-05-15 11:12:55.025788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.025834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.025844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.025849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.025853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.025863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 00:26:58.510 [2024-05-15 11:12:55.035931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.035969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.035982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.035987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.035991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.036000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 
00:26:58.510 [2024-05-15 11:12:55.045987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.046025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.046035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.046040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.046044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.046053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 00:26:58.510 [2024-05-15 11:12:55.055995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.056036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.056047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.056051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.056055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.056065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 00:26:58.510 [2024-05-15 11:12:55.066024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.066065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.066076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.066081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.066085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.066095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 
00:26:58.510 [2024-05-15 11:12:55.076028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.076063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.076073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.076078] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.076082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.076094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 00:26:58.510 [2024-05-15 11:12:55.086071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.086109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.086119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.086124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.086128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.086138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 00:26:58.510 [2024-05-15 11:12:55.096020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.096059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.096070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.096075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.096079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.096089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 
00:26:58.510 [2024-05-15 11:12:55.106129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.106174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.106185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.106190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.106194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.106203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 00:26:58.510 [2024-05-15 11:12:55.116144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.510 [2024-05-15 11:12:55.116187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.510 [2024-05-15 11:12:55.116197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.510 [2024-05-15 11:12:55.116201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.510 [2024-05-15 11:12:55.116206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.510 [2024-05-15 11:12:55.116216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.510 qpair failed and we were unable to recover it. 00:26:58.511 [2024-05-15 11:12:55.126185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.511 [2024-05-15 11:12:55.126232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.511 [2024-05-15 11:12:55.126245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.511 [2024-05-15 11:12:55.126250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.511 [2024-05-15 11:12:55.126254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.511 [2024-05-15 11:12:55.126264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.511 qpair failed and we were unable to recover it. 
00:26:58.511 [2024-05-15 11:12:55.136176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.511 [2024-05-15 11:12:55.136217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.511 [2024-05-15 11:12:55.136235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.511 [2024-05-15 11:12:55.136241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.511 [2024-05-15 11:12:55.136245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.511 [2024-05-15 11:12:55.136258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.511 qpair failed and we were unable to recover it. 00:26:58.511 [2024-05-15 11:12:55.146285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.511 [2024-05-15 11:12:55.146351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.511 [2024-05-15 11:12:55.146363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.511 [2024-05-15 11:12:55.146368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.511 [2024-05-15 11:12:55.146372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.511 [2024-05-15 11:12:55.146383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.511 qpair failed and we were unable to recover it. 00:26:58.511 [2024-05-15 11:12:55.156252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.511 [2024-05-15 11:12:55.156295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.511 [2024-05-15 11:12:55.156313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.511 [2024-05-15 11:12:55.156318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.511 [2024-05-15 11:12:55.156323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.511 [2024-05-15 11:12:55.156337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.511 qpair failed and we were unable to recover it. 
00:26:58.774 [2024-05-15 11:12:55.166246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.774 [2024-05-15 11:12:55.166288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.774 [2024-05-15 11:12:55.166307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.774 [2024-05-15 11:12:55.166313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.774 [2024-05-15 11:12:55.166321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.774 [2024-05-15 11:12:55.166334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.774 qpair failed and we were unable to recover it. 00:26:58.774 [2024-05-15 11:12:55.176271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.774 [2024-05-15 11:12:55.176311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.774 [2024-05-15 11:12:55.176323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.774 [2024-05-15 11:12:55.176327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.774 [2024-05-15 11:12:55.176332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.774 [2024-05-15 11:12:55.176342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 00:26:58.775 [2024-05-15 11:12:55.186335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.186426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.186437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.186442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.186446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.186456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 
00:26:58.775 [2024-05-15 11:12:55.196359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.196399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.196410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.196414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.196418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.196428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 00:26:58.775 [2024-05-15 11:12:55.206244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.206278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.206288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.206293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.206297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.206307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 00:26:58.775 [2024-05-15 11:12:55.216415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.216457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.216467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.216472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.216476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.216486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 
00:26:58.775 [2024-05-15 11:12:55.226451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.226499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.226510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.226514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.226519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.226528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 00:26:58.775 [2024-05-15 11:12:55.236465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.236504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.236515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.236520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.236524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.236534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 00:26:58.775 [2024-05-15 11:12:55.246496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.246536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.246552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.246557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.246561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.246572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 
00:26:58.775 [2024-05-15 11:12:55.256532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.256611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.256621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.256631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.256635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.256645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 00:26:58.775 [2024-05-15 11:12:55.266427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.266472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.266483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.266487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.266491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.266501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 00:26:58.775 [2024-05-15 11:12:55.276601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.775 [2024-05-15 11:12:55.276660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.775 [2024-05-15 11:12:55.276671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.775 [2024-05-15 11:12:55.276675] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.775 [2024-05-15 11:12:55.276680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.775 [2024-05-15 11:12:55.276690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.775 qpair failed and we were unable to recover it. 
00:26:58.775 [2024-05-15 11:12:55.286617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.286658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.286668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.286672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.286677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.286687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 00:26:58.776 [2024-05-15 11:12:55.296614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.296652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.296663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.296667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.296671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.296681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 00:26:58.776 [2024-05-15 11:12:55.306647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.306717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.306727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.306732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.306736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.306746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 
00:26:58.776 [2024-05-15 11:12:55.316695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.316740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.316750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.316755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.316759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.316769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 00:26:58.776 [2024-05-15 11:12:55.326749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.326830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.326841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.326845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.326850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.326859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 00:26:58.776 [2024-05-15 11:12:55.336752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.336790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.336800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.336805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.336809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.336819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 
00:26:58.776 [2024-05-15 11:12:55.346789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.346828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.346838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.346846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.346850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.346860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 00:26:58.776 [2024-05-15 11:12:55.356786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.356829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.356839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.356844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.356848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.356858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 00:26:58.776 [2024-05-15 11:12:55.366807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.776 [2024-05-15 11:12:55.366845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.776 [2024-05-15 11:12:55.366856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.776 [2024-05-15 11:12:55.366861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.776 [2024-05-15 11:12:55.366865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.776 [2024-05-15 11:12:55.366874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.776 qpair failed and we were unable to recover it. 
00:26:58.776 [2024-05-15 11:12:55.376749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.777 [2024-05-15 11:12:55.376806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.777 [2024-05-15 11:12:55.376817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.777 [2024-05-15 11:12:55.376822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.777 [2024-05-15 11:12:55.376826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.777 [2024-05-15 11:12:55.376836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.777 qpair failed and we were unable to recover it. 00:26:58.777 [2024-05-15 11:12:55.386892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.777 [2024-05-15 11:12:55.386939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.777 [2024-05-15 11:12:55.386950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.777 [2024-05-15 11:12:55.386955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.777 [2024-05-15 11:12:55.386959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.777 [2024-05-15 11:12:55.386969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.777 qpair failed and we were unable to recover it. 00:26:58.777 [2024-05-15 11:12:55.396913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.777 [2024-05-15 11:12:55.396954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.777 [2024-05-15 11:12:55.396964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.777 [2024-05-15 11:12:55.396969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.777 [2024-05-15 11:12:55.396973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.777 [2024-05-15 11:12:55.396983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.777 qpair failed and we were unable to recover it. 
00:26:58.777 [2024-05-15 11:12:55.406933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.777 [2024-05-15 11:12:55.406977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.777 [2024-05-15 11:12:55.406987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.777 [2024-05-15 11:12:55.406992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.777 [2024-05-15 11:12:55.406996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.777 [2024-05-15 11:12:55.407005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.777 qpair failed and we were unable to recover it. 00:26:58.777 [2024-05-15 11:12:55.416968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.777 [2024-05-15 11:12:55.417008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.777 [2024-05-15 11:12:55.417018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.777 [2024-05-15 11:12:55.417023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.777 [2024-05-15 11:12:55.417027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:58.777 [2024-05-15 11:12:55.417037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.777 qpair failed and we were unable to recover it. 00:26:59.039 [2024-05-15 11:12:55.427000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.039 [2024-05-15 11:12:55.427045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.039 [2024-05-15 11:12:55.427055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.039 [2024-05-15 11:12:55.427059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.039 [2024-05-15 11:12:55.427063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.039 [2024-05-15 11:12:55.427073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.039 qpair failed and we were unable to recover it. 
00:26:59.039 [2024-05-15 11:12:55.437004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.039 [2024-05-15 11:12:55.437039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.039 [2024-05-15 11:12:55.437052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.039 [2024-05-15 11:12:55.437056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.039 [2024-05-15 11:12:55.437061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.039 [2024-05-15 11:12:55.437070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.039 qpair failed and we were unable to recover it. 00:26:59.039 [2024-05-15 11:12:55.447045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.039 [2024-05-15 11:12:55.447085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.039 [2024-05-15 11:12:55.447096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.039 [2024-05-15 11:12:55.447100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.039 [2024-05-15 11:12:55.447105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.039 [2024-05-15 11:12:55.447114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.039 qpair failed and we were unable to recover it. 00:26:59.039 [2024-05-15 11:12:55.457076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.039 [2024-05-15 11:12:55.457117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.039 [2024-05-15 11:12:55.457127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.039 [2024-05-15 11:12:55.457131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.039 [2024-05-15 11:12:55.457136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.039 [2024-05-15 11:12:55.457145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.039 qpair failed and we were unable to recover it. 
00:26:59.039 [2024-05-15 11:12:55.467103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.039 [2024-05-15 11:12:55.467142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.039 [2024-05-15 11:12:55.467153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.039 [2024-05-15 11:12:55.467157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.039 [2024-05-15 11:12:55.467161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.039 [2024-05-15 11:12:55.467171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.039 qpair failed and we were unable to recover it. 00:26:59.039 [2024-05-15 11:12:55.477093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.039 [2024-05-15 11:12:55.477128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.039 [2024-05-15 11:12:55.477138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.039 [2024-05-15 11:12:55.477143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.039 [2024-05-15 11:12:55.477148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.039 [2024-05-15 11:12:55.477160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 00:26:59.040 [2024-05-15 11:12:55.487042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.487090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.487100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.487104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.487108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.487118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 
00:26:59.040 [2024-05-15 11:12:55.497182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.497218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.497228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.497233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.497237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.497247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 00:26:59.040 [2024-05-15 11:12:55.507225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.507264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.507275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.507279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.507283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.507293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 00:26:59.040 [2024-05-15 11:12:55.517204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.517242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.517253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.517257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.517262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.517271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 
00:26:59.040 [2024-05-15 11:12:55.527272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.527309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.527322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.527326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.527331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.527340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 00:26:59.040 [2024-05-15 11:12:55.537292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.537332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.537343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.537347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.537352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.537361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 00:26:59.040 [2024-05-15 11:12:55.547204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.547260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.547270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.547274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.547279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.547288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 
00:26:59.040 [2024-05-15 11:12:55.557337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.557376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.557387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.557392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.557396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.557406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 00:26:59.040 [2024-05-15 11:12:55.567367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.567407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.567417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.567422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.567429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.567439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 00:26:59.040 [2024-05-15 11:12:55.577408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.577448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.577458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.577463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.577467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.577476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 
00:26:59.040 [2024-05-15 11:12:55.587422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.587465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.587475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.587480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.587484] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.587494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.040 qpair failed and we were unable to recover it. 00:26:59.040 [2024-05-15 11:12:55.597446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.040 [2024-05-15 11:12:55.597520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.040 [2024-05-15 11:12:55.597531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.040 [2024-05-15 11:12:55.597536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.040 [2024-05-15 11:12:55.597540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.040 [2024-05-15 11:12:55.597557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 00:26:59.041 [2024-05-15 11:12:55.607483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.607526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.607537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.607542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.607549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.607559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 
00:26:59.041 [2024-05-15 11:12:55.617481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.617520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.617530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.617535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.617539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.617551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 00:26:59.041 [2024-05-15 11:12:55.627535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.627583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.627593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.627598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.627602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.627612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 00:26:59.041 [2024-05-15 11:12:55.637563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.637615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.637625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.637629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.637634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.637643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 
00:26:59.041 [2024-05-15 11:12:55.647590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.647631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.647642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.647646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.647651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.647661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 00:26:59.041 [2024-05-15 11:12:55.657622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.657661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.657672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.657679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.657684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.657693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 00:26:59.041 [2024-05-15 11:12:55.667648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.667692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.667702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.667707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.667711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.667721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 
00:26:59.041 [2024-05-15 11:12:55.677662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.677733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.677743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.677748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.677752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.677762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 00:26:59.041 [2024-05-15 11:12:55.687695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.041 [2024-05-15 11:12:55.687733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.041 [2024-05-15 11:12:55.687744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.041 [2024-05-15 11:12:55.687748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.041 [2024-05-15 11:12:55.687752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.041 [2024-05-15 11:12:55.687762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.041 qpair failed and we were unable to recover it. 00:26:59.303 [2024-05-15 11:12:55.697603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.303 [2024-05-15 11:12:55.697645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.303 [2024-05-15 11:12:55.697655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.303 [2024-05-15 11:12:55.697659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.303 [2024-05-15 11:12:55.697663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.303 [2024-05-15 11:12:55.697673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.303 qpair failed and we were unable to recover it. 
00:26:59.303 [2024-05-15 11:12:55.707728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.303 [2024-05-15 11:12:55.707772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.303 [2024-05-15 11:12:55.707783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.303 [2024-05-15 11:12:55.707787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.303 [2024-05-15 11:12:55.707792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.303 [2024-05-15 11:12:55.707801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-05-15 11:12:55.717749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.303 [2024-05-15 11:12:55.717782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.303 [2024-05-15 11:12:55.717793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.303 [2024-05-15 11:12:55.717797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.303 [2024-05-15 11:12:55.717801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.303 [2024-05-15 11:12:55.717811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.303 qpair failed and we were unable to recover it. 00:26:59.303 [2024-05-15 11:12:55.727805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.303 [2024-05-15 11:12:55.727843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.303 [2024-05-15 11:12:55.727853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.303 [2024-05-15 11:12:55.727858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.303 [2024-05-15 11:12:55.727862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.727872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-05-15 11:12:55.737714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.737762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.737772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.737777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.737781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.737790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-05-15 11:12:55.747862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.747901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.747912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.747919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.747923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.747932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-05-15 11:12:55.757886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.757933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.757943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.757948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.757952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.757962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-05-15 11:12:55.767961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.768034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.768045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.768049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.768053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.768063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-05-15 11:12:55.777934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.777975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.777985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.777990] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.777994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.778004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-05-15 11:12:55.787967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.788009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.788019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.788024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.788028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.788038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-05-15 11:12:55.797941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.797980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.797991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.797995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.798000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.798009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-05-15 11:12:55.808003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.808035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.808045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.808050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.808054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.808064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-05-15 11:12:55.818047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.818083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.818093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.818098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.818102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.818112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.304 [2024-05-15 11:12:55.828072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.828112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.828122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.828127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.828131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.828141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-05-15 11:12:55.838102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.838165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.838178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.838183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.838188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.838197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 00:26:59.304 [2024-05-15 11:12:55.848144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.304 [2024-05-15 11:12:55.848180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.304 [2024-05-15 11:12:55.848190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.304 [2024-05-15 11:12:55.848195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.304 [2024-05-15 11:12:55.848199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.304 [2024-05-15 11:12:55.848209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.304 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-05-15 11:12:55.858172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.858210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.858220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.858225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.858229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.858239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-05-15 11:12:55.868187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.868226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.868236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.868241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.868245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.868254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-05-15 11:12:55.878205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.878241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.878251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.878256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.878260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.878273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-05-15 11:12:55.888231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.888287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.888297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.888301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.888305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.888315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-05-15 11:12:55.898249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.898288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.898298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.898302] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.898306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.898316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-05-15 11:12:55.908283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.908322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.908332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.908337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.908341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.908350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-05-15 11:12:55.918331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.918384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.918394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.918399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.918403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.918412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-05-15 11:12:55.928337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.928380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.928393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.928397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.928401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.928411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.305 [2024-05-15 11:12:55.938371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.938410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.938421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.938425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.938429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.938439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 
00:26:59.305 [2024-05-15 11:12:55.948381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.305 [2024-05-15 11:12:55.948427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.305 [2024-05-15 11:12:55.948437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.305 [2024-05-15 11:12:55.948442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.305 [2024-05-15 11:12:55.948446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.305 [2024-05-15 11:12:55.948455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.305 qpair failed and we were unable to recover it. 00:26:59.569 [2024-05-15 11:12:55.958422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:55.958462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:55.958472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:55.958477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:55.958481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:55.958491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 00:26:59.569 [2024-05-15 11:12:55.968451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:55.968484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:55.968495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:55.968499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:55.968508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:55.968517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 
00:26:59.569 [2024-05-15 11:12:55.978465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:55.978562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:55.978574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:55.978579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:55.978583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:55.978595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 00:26:59.569 [2024-05-15 11:12:55.988518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:55.988605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:55.988616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:55.988621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:55.988626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:55.988637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 00:26:59.569 [2024-05-15 11:12:55.998515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:55.998556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:55.998566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:55.998571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:55.998575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:55.998585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 
00:26:59.569 [2024-05-15 11:12:56.008550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:56.008586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:56.008596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:56.008601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:56.008605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:56.008615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 00:26:59.569 [2024-05-15 11:12:56.018581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:56.018631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:56.018642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:56.018647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:56.018651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:56.018661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 00:26:59.569 [2024-05-15 11:12:56.028615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:56.028687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:56.028697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:56.028701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:56.028706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:56.028715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 
00:26:59.569 [2024-05-15 11:12:56.038633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.569 [2024-05-15 11:12:56.038674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.569 [2024-05-15 11:12:56.038684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.569 [2024-05-15 11:12:56.038688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.569 [2024-05-15 11:12:56.038693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.569 [2024-05-15 11:12:56.038702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.569 qpair failed and we were unable to recover it. 00:26:59.569 [2024-05-15 11:12:56.048661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.048705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.048715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.048720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.048724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.048733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 00:26:59.570 [2024-05-15 11:12:56.058676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.058715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.058725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.058730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.058737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.058747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 
00:26:59.570 [2024-05-15 11:12:56.068674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.068715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.068725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.068730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.068734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.068743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 00:26:59.570 [2024-05-15 11:12:56.078759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.078832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.078843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.078848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.078852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.078862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 00:26:59.570 [2024-05-15 11:12:56.088758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.088812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.088822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.088827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.088831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.088841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 
00:26:59.570 [2024-05-15 11:12:56.098781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.098828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.098838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.098843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.098847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.098856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 00:26:59.570 [2024-05-15 11:12:56.108808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.108855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.108866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.108871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.108875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.108885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 00:26:59.570 [2024-05-15 11:12:56.118848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.118918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.118928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.118933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.118937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.118947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 
00:26:59.570 [2024-05-15 11:12:56.128851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.128895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.128905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.128910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.128914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.128924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 00:26:59.570 [2024-05-15 11:12:56.138866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.138906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.138916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.138921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.138925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.138935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 00:26:59.570 [2024-05-15 11:12:56.148925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.148965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.148974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.148982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.148986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.148995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 
00:26:59.570 [2024-05-15 11:12:56.158943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.570 [2024-05-15 11:12:56.158982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.570 [2024-05-15 11:12:56.158993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.570 [2024-05-15 11:12:56.158997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.570 [2024-05-15 11:12:56.159002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.570 [2024-05-15 11:12:56.159011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.570 qpair failed and we were unable to recover it. 00:26:59.570 [2024-05-15 11:12:56.168971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.571 [2024-05-15 11:12:56.169008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.571 [2024-05-15 11:12:56.169018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.571 [2024-05-15 11:12:56.169022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.571 [2024-05-15 11:12:56.169027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.571 [2024-05-15 11:12:56.169036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.571 qpair failed and we were unable to recover it. 00:26:59.571 [2024-05-15 11:12:56.179014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.571 [2024-05-15 11:12:56.179053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.571 [2024-05-15 11:12:56.179063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.571 [2024-05-15 11:12:56.179067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.571 [2024-05-15 11:12:56.179071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.571 [2024-05-15 11:12:56.179081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.571 qpair failed and we were unable to recover it. 
00:26:59.571 [2024-05-15 11:12:56.189016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.571 [2024-05-15 11:12:56.189056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.571 [2024-05-15 11:12:56.189066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.571 [2024-05-15 11:12:56.189070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.571 [2024-05-15 11:12:56.189074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.571 [2024-05-15 11:12:56.189084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.571 qpair failed and we were unable to recover it. 00:26:59.571 [2024-05-15 11:12:56.199063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.571 [2024-05-15 11:12:56.199102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.571 [2024-05-15 11:12:56.199112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.571 [2024-05-15 11:12:56.199117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.571 [2024-05-15 11:12:56.199121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.571 [2024-05-15 11:12:56.199131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.571 qpair failed and we were unable to recover it. 00:26:59.571 [2024-05-15 11:12:56.209043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.571 [2024-05-15 11:12:56.209081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.571 [2024-05-15 11:12:56.209091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.571 [2024-05-15 11:12:56.209096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.571 [2024-05-15 11:12:56.209100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.571 [2024-05-15 11:12:56.209109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.571 qpair failed and we were unable to recover it. 
00:26:59.571 [2024-05-15 11:12:56.219114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.571 [2024-05-15 11:12:56.219150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.571 [2024-05-15 11:12:56.219160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.571 [2024-05-15 11:12:56.219165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.571 [2024-05-15 11:12:56.219169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.571 [2024-05-15 11:12:56.219179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.571 qpair failed and we were unable to recover it. 00:26:59.833 [2024-05-15 11:12:56.229146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.833 [2024-05-15 11:12:56.229197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.833 [2024-05-15 11:12:56.229207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.833 [2024-05-15 11:12:56.229212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.833 [2024-05-15 11:12:56.229216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.833 [2024-05-15 11:12:56.229225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.833 qpair failed and we were unable to recover it. 00:26:59.833 [2024-05-15 11:12:56.239153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.833 [2024-05-15 11:12:56.239192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.833 [2024-05-15 11:12:56.239204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.833 [2024-05-15 11:12:56.239209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.833 [2024-05-15 11:12:56.239213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.833 [2024-05-15 11:12:56.239223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.833 qpair failed and we were unable to recover it. 
00:26:59.833 [2024-05-15 11:12:56.249193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.833 [2024-05-15 11:12:56.249227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.833 [2024-05-15 11:12:56.249237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.833 [2024-05-15 11:12:56.249242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.833 [2024-05-15 11:12:56.249246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.833 [2024-05-15 11:12:56.249255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.833 qpair failed and we were unable to recover it. 00:26:59.833 [2024-05-15 11:12:56.259229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.833 [2024-05-15 11:12:56.259271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.833 [2024-05-15 11:12:56.259289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.833 [2024-05-15 11:12:56.259294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.833 [2024-05-15 11:12:56.259299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.833 [2024-05-15 11:12:56.259312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.833 qpair failed and we were unable to recover it. 00:26:59.833 [2024-05-15 11:12:56.269384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.833 [2024-05-15 11:12:56.269439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.833 [2024-05-15 11:12:56.269457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.833 [2024-05-15 11:12:56.269462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.833 [2024-05-15 11:12:56.269467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.833 [2024-05-15 11:12:56.269480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.833 qpair failed and we were unable to recover it. 
00:26:59.833 [2024-05-15 11:12:56.279256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.833 [2024-05-15 11:12:56.279296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.833 [2024-05-15 11:12:56.279307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.833 [2024-05-15 11:12:56.279312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.833 [2024-05-15 11:12:56.279316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.833 [2024-05-15 11:12:56.279331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.833 qpair failed and we were unable to recover it. 00:26:59.833 [2024-05-15 11:12:56.289305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.833 [2024-05-15 11:12:56.289345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.833 [2024-05-15 11:12:56.289355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.833 [2024-05-15 11:12:56.289360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.833 [2024-05-15 11:12:56.289364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.833 [2024-05-15 11:12:56.289374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.833 qpair failed and we were unable to recover it. 00:26:59.833 [2024-05-15 11:12:56.299342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.833 [2024-05-15 11:12:56.299382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.833 [2024-05-15 11:12:56.299392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.833 [2024-05-15 11:12:56.299397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.833 [2024-05-15 11:12:56.299401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.299411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 
00:26:59.834 [2024-05-15 11:12:56.309368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.309414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.309424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.309429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.309433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.309443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 00:26:59.834 [2024-05-15 11:12:56.319381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.319430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.319440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.319445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.319449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.319459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 00:26:59.834 [2024-05-15 11:12:56.329409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.329449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.329462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.329467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.329471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.329481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 
00:26:59.834 [2024-05-15 11:12:56.339329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.339365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.339375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.339379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.339383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.339393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 00:26:59.834 [2024-05-15 11:12:56.349449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.349489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.349499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.349504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.349508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.349517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 00:26:59.834 [2024-05-15 11:12:56.359495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.359538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.359553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.359557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.359562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.359572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 
00:26:59.834 [2024-05-15 11:12:56.369385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.369449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.369459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.369464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.369468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.369481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 00:26:59.834 [2024-05-15 11:12:56.379563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.379643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.379654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.379658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.379663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.379672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 00:26:59.834 [2024-05-15 11:12:56.389538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.389581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.389592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.389596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.389600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.389610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 
00:26:59.834 [2024-05-15 11:12:56.399609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.399649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.399659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.399664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.399668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.399677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 00:26:59.834 [2024-05-15 11:12:56.409580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.409618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.409628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.409632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.409637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.409647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 00:26:59.834 [2024-05-15 11:12:56.419648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.834 [2024-05-15 11:12:56.419689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.834 [2024-05-15 11:12:56.419700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.834 [2024-05-15 11:12:56.419704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.834 [2024-05-15 11:12:56.419709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.834 [2024-05-15 11:12:56.419718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.834 qpair failed and we were unable to recover it. 
00:26:59.834 [2024-05-15 11:12:56.429665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.835 [2024-05-15 11:12:56.429750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.835 [2024-05-15 11:12:56.429761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.835 [2024-05-15 11:12:56.429765] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.835 [2024-05-15 11:12:56.429769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.835 [2024-05-15 11:12:56.429779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.835 qpair failed and we were unable to recover it. 00:26:59.835 [2024-05-15 11:12:56.439706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.835 [2024-05-15 11:12:56.439748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.835 [2024-05-15 11:12:56.439758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.835 [2024-05-15 11:12:56.439763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.835 [2024-05-15 11:12:56.439767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.835 [2024-05-15 11:12:56.439777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.835 qpair failed and we were unable to recover it. 00:26:59.835 [2024-05-15 11:12:56.449629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.835 [2024-05-15 11:12:56.449686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.835 [2024-05-15 11:12:56.449696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.835 [2024-05-15 11:12:56.449700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.835 [2024-05-15 11:12:56.449705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.835 [2024-05-15 11:12:56.449714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.835 qpair failed and we were unable to recover it. 
00:26:59.835 [2024-05-15 11:12:56.459774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.835 [2024-05-15 11:12:56.459813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.835 [2024-05-15 11:12:56.459823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.835 [2024-05-15 11:12:56.459828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.835 [2024-05-15 11:12:56.459835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.835 [2024-05-15 11:12:56.459844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.835 qpair failed and we were unable to recover it. 00:26:59.835 [2024-05-15 11:12:56.469816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.835 [2024-05-15 11:12:56.469858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.835 [2024-05-15 11:12:56.469869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.835 [2024-05-15 11:12:56.469873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.835 [2024-05-15 11:12:56.469877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.835 [2024-05-15 11:12:56.469887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.835 qpair failed and we were unable to recover it. 00:26:59.835 [2024-05-15 11:12:56.479793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.835 [2024-05-15 11:12:56.479831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.835 [2024-05-15 11:12:56.479841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.835 [2024-05-15 11:12:56.479846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.835 [2024-05-15 11:12:56.479850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:26:59.835 [2024-05-15 11:12:56.479859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.835 qpair failed and we were unable to recover it. 
00:27:00.097 [2024-05-15 11:12:56.489841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.097 [2024-05-15 11:12:56.489879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.097 [2024-05-15 11:12:56.489889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.489894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.489898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.489907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 00:27:00.098 [2024-05-15 11:12:56.499885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.499928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.499938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.499943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.499947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.499957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 00:27:00.098 [2024-05-15 11:12:56.509961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.510000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.510011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.510015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.510019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.510029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 
00:27:00.098 [2024-05-15 11:12:56.519932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.519977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.519987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.519992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.519996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.520006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 00:27:00.098 [2024-05-15 11:12:56.529964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.530002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.530013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.530017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.530022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.530031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 00:27:00.098 [2024-05-15 11:12:56.540000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.540039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.540050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.540055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.540060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.540069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 
00:27:00.098 [2024-05-15 11:12:56.550024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.550070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.550080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.550088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.550092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.550102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 00:27:00.098 [2024-05-15 11:12:56.559910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.559947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.559957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.559962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.559966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.559976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 00:27:00.098 [2024-05-15 11:12:56.570063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.570101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.570111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.570116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.570120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.570129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 
00:27:00.098 [2024-05-15 11:12:56.580013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.580057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.580067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.580072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.580076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.580086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 00:27:00.098 [2024-05-15 11:12:56.590126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.590172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.590182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.590187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.590191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.590200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 00:27:00.098 [2024-05-15 11:12:56.600174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.600211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.600221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.600226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.600230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.600240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.098 qpair failed and we were unable to recover it. 
00:27:00.098 [2024-05-15 11:12:56.610144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.098 [2024-05-15 11:12:56.610177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.098 [2024-05-15 11:12:56.610187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.098 [2024-05-15 11:12:56.610192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.098 [2024-05-15 11:12:56.610196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.098 [2024-05-15 11:12:56.610206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.620213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.620252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.620262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.620267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.620271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.620281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.630234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.630321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.630331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.630336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.630340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.630349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 
00:27:00.099 [2024-05-15 11:12:56.640235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.640272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.640285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.640290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.640294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.640304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.650283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.650323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.650334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.650339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.650343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.650353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.660314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.660360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.660370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.660375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.660379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.660389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 
00:27:00.099 [2024-05-15 11:12:56.670393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.670434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.670444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.670449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.670453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.670462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.680367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.680406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.680416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.680421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.680425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.680438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.690358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.690397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.690408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.690412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.690417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.690426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 
00:27:00.099 [2024-05-15 11:12:56.700449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.700499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.700509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.700513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.700517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.700527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.710334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.710380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.710392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.710397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.710401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.710411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.720480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.720549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.720560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.720565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.720569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.720579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 
00:27:00.099 [2024-05-15 11:12:56.730513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.730554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.099 [2024-05-15 11:12:56.730569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.099 [2024-05-15 11:12:56.730573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.099 [2024-05-15 11:12:56.730578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.099 [2024-05-15 11:12:56.730588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.099 qpair failed and we were unable to recover it. 00:27:00.099 [2024-05-15 11:12:56.740552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.099 [2024-05-15 11:12:56.740593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.100 [2024-05-15 11:12:56.740604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.100 [2024-05-15 11:12:56.740608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.100 [2024-05-15 11:12:56.740612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.100 [2024-05-15 11:12:56.740622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.100 qpair failed and we were unable to recover it. 00:27:00.362 [2024-05-15 11:12:56.750579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.362 [2024-05-15 11:12:56.750648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.362 [2024-05-15 11:12:56.750658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.362 [2024-05-15 11:12:56.750663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.362 [2024-05-15 11:12:56.750667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.362 [2024-05-15 11:12:56.750676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.362 qpair failed and we were unable to recover it. 
00:27:00.362 [2024-05-15 11:12:56.760595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.362 [2024-05-15 11:12:56.760674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.362 [2024-05-15 11:12:56.760685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.362 [2024-05-15 11:12:56.760689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.362 [2024-05-15 11:12:56.760693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.362 [2024-05-15 11:12:56.760703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.362 qpair failed and we were unable to recover it. 00:27:00.362 [2024-05-15 11:12:56.770625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.362 [2024-05-15 11:12:56.770663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.362 [2024-05-15 11:12:56.770673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.770678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.770682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.770695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 00:27:00.363 [2024-05-15 11:12:56.780645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.780685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.780695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.780700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.780704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.780714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 
00:27:00.363 [2024-05-15 11:12:56.790676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.790717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.790727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.790732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.790736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.790745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 00:27:00.363 [2024-05-15 11:12:56.800707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.800749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.800759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.800764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.800768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.800778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 00:27:00.363 [2024-05-15 11:12:56.810709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.810747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.810758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.810763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.810767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.810776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 
00:27:00.363 [2024-05-15 11:12:56.820725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.820761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.820775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.820780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.820784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.820793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 00:27:00.363 [2024-05-15 11:12:56.830784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.830829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.830839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.830844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.830848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.830858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 00:27:00.363 [2024-05-15 11:12:56.840795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.840859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.840869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.840874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.840878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.840887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 
00:27:00.363 [2024-05-15 11:12:56.850833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.850870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.850881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.850886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.850890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.850900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 00:27:00.363 [2024-05-15 11:12:56.860855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.860893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.860904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.860908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.860915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.363 [2024-05-15 11:12:56.860925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.363 qpair failed and we were unable to recover it. 00:27:00.363 [2024-05-15 11:12:56.870863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.363 [2024-05-15 11:12:56.870901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.363 [2024-05-15 11:12:56.870911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.363 [2024-05-15 11:12:56.870916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.363 [2024-05-15 11:12:56.870920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.870930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 
00:27:00.364 [2024-05-15 11:12:56.880899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.880940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.880950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.880955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.880959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.880969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 00:27:00.364 [2024-05-15 11:12:56.890914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.890954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.890964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.890969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.890973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.890983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 00:27:00.364 [2024-05-15 11:12:56.900960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.900998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.901008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.901013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.901017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.901027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 
00:27:00.364 [2024-05-15 11:12:56.910984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.911030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.911040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.911045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.911049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.911059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 00:27:00.364 [2024-05-15 11:12:56.920980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.921020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.921031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.921035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.921039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.921049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 00:27:00.364 [2024-05-15 11:12:56.931079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.931120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.931131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.931137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.931142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.931152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 
00:27:00.364 [2024-05-15 11:12:56.941075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.941116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.941126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.941131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.941135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.941145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 00:27:00.364 [2024-05-15 11:12:56.951089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.951135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.951145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.951152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.951156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.951166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 00:27:00.364 [2024-05-15 11:12:56.961025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.364 [2024-05-15 11:12:56.961063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.364 [2024-05-15 11:12:56.961075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.364 [2024-05-15 11:12:56.961079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.364 [2024-05-15 11:12:56.961084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.364 [2024-05-15 11:12:56.961093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.364 qpair failed and we were unable to recover it. 
00:27:00.365 [2024-05-15 11:12:56.971158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.365 [2024-05-15 11:12:56.971197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.365 [2024-05-15 11:12:56.971207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.365 [2024-05-15 11:12:56.971211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.365 [2024-05-15 11:12:56.971216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.365 [2024-05-15 11:12:56.971226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.365 qpair failed and we were unable to recover it. 00:27:00.365 [2024-05-15 11:12:56.981172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.365 [2024-05-15 11:12:56.981214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.365 [2024-05-15 11:12:56.981224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.365 [2024-05-15 11:12:56.981229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.365 [2024-05-15 11:12:56.981233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.365 [2024-05-15 11:12:56.981242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.365 qpair failed and we were unable to recover it. 00:27:00.365 [2024-05-15 11:12:56.991190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.365 [2024-05-15 11:12:56.991242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.365 [2024-05-15 11:12:56.991252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.365 [2024-05-15 11:12:56.991257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.365 [2024-05-15 11:12:56.991261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.365 [2024-05-15 11:12:56.991271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.365 qpair failed and we were unable to recover it. 
00:27:00.365 [2024-05-15 11:12:57.001223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.365 [2024-05-15 11:12:57.001263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.365 [2024-05-15 11:12:57.001274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.365 [2024-05-15 11:12:57.001279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.365 [2024-05-15 11:12:57.001283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.365 [2024-05-15 11:12:57.001293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.365 qpair failed and we were unable to recover it. 00:27:00.365 [2024-05-15 11:12:57.011222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.365 [2024-05-15 11:12:57.011264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.365 [2024-05-15 11:12:57.011275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.365 [2024-05-15 11:12:57.011280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.365 [2024-05-15 11:12:57.011284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.365 [2024-05-15 11:12:57.011294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.365 qpair failed and we were unable to recover it. 00:27:00.626 [2024-05-15 11:12:57.021271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.626 [2024-05-15 11:12:57.021313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.626 [2024-05-15 11:12:57.021331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.626 [2024-05-15 11:12:57.021336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.626 [2024-05-15 11:12:57.021341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.626 [2024-05-15 11:12:57.021354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.626 qpair failed and we were unable to recover it. 
00:27:00.626 [2024-05-15 11:12:57.031310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.626 [2024-05-15 11:12:57.031359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.626 [2024-05-15 11:12:57.031370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.627 [2024-05-15 11:12:57.031375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.627 [2024-05-15 11:12:57.031379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.627 [2024-05-15 11:12:57.031390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.627 qpair failed and we were unable to recover it. 00:27:00.627 [2024-05-15 11:12:57.041291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.627 [2024-05-15 11:12:57.041327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.627 [2024-05-15 11:12:57.041340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.627 [2024-05-15 11:12:57.041348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.627 [2024-05-15 11:12:57.041353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.627 [2024-05-15 11:12:57.041364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.627 qpair failed and we were unable to recover it. 00:27:00.627 [2024-05-15 11:12:57.051332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.627 [2024-05-15 11:12:57.051369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.627 [2024-05-15 11:12:57.051380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.627 [2024-05-15 11:12:57.051385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.627 [2024-05-15 11:12:57.051389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.627 [2024-05-15 11:12:57.051399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.627 qpair failed and we were unable to recover it. 
00:27:00.627 [2024-05-15 11:12:57.061390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.627 [2024-05-15 11:12:57.061440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.627 [2024-05-15 11:12:57.061451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.627 [2024-05-15 11:12:57.061456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.627 [2024-05-15 11:12:57.061460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.627 [2024-05-15 11:12:57.061471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.627 qpair failed and we were unable to recover it. 00:27:00.627 [2024-05-15 11:12:57.071389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.627 [2024-05-15 11:12:57.071433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.627 [2024-05-15 11:12:57.071443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.627 [2024-05-15 11:12:57.071448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.627 [2024-05-15 11:12:57.071452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.627 [2024-05-15 11:12:57.071462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.627 qpair failed and we were unable to recover it. 00:27:00.627 [2024-05-15 11:12:57.081456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.627 [2024-05-15 11:12:57.081496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.627 [2024-05-15 11:12:57.081507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.627 [2024-05-15 11:12:57.081511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.627 [2024-05-15 11:12:57.081515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6968000b90 00:27:00.627 [2024-05-15 11:12:57.081526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.627 qpair failed and we were unable to recover it. 
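The entries that follow switch from CONNECT failures to in-flight I/O being completed with sct=0, sc=8, which appears to decode to the generic "Command Aborted due to SQ Deletion" status, i.e. outstanding reads and writes are failed back as the qpair is torn down by the disconnect test. A hedged sketch for splitting those entries by direction from a saved log (autorun.log is again only a placeholder):

  grep -Eo '(Read|Write) completed with error \(sct=0, sc=8\)' autorun.log | sort | uniq -c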
00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Write completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 [2024-05-15 11:12:57.082351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:00.627 [2024-05-15 11:12:57.091530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.627 [2024-05-15 11:12:57.091632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.627 [2024-05-15 11:12:57.091680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.627 [2024-05-15 11:12:57.091702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:27:00.627 [2024-05-15 11:12:57.091722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6960000b90 00:27:00.627 [2024-05-15 11:12:57.091769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:00.627 qpair failed and we were unable to recover it. 00:27:00.627 [2024-05-15 11:12:57.101489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.627 [2024-05-15 11:12:57.101567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.627 [2024-05-15 11:12:57.101599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.627 [2024-05-15 11:12:57.101614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.627 [2024-05-15 11:12:57.101628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6960000b90 00:27:00.627 [2024-05-15 11:12:57.101658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:00.627 qpair failed and we were unable to recover it. 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.627 starting I/O failed 00:27:00.627 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting 
I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 [2024-05-15 11:12:57.102567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.628 [2024-05-15 11:12:57.102869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f26ec0 is same with the state(5) to be set 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 
00:27:00.628 Read completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 Write completed with error (sct=0, sc=8) 00:27:00.628 starting I/O failed 00:27:00.628 [2024-05-15 11:12:57.103349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.628 [2024-05-15 11:12:57.111529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.628 [2024-05-15 11:12:57.111618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.628 [2024-05-15 11:12:57.111668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.628 [2024-05-15 11:12:57.111691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.628 [2024-05-15 11:12:57.111710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6970000b90 00:27:00.628 [2024-05-15 11:12:57.111756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.628 qpair failed and we were unable to recover it. 00:27:00.628 [2024-05-15 11:12:57.121553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.628 [2024-05-15 11:12:57.121629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.628 [2024-05-15 11:12:57.121659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.628 [2024-05-15 11:12:57.121674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.628 [2024-05-15 11:12:57.121687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6970000b90 00:27:00.628 [2024-05-15 11:12:57.121718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.628 qpair failed and we were unable to recover it. 00:27:00.628 [2024-05-15 11:12:57.131626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.628 [2024-05-15 11:12:57.131709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.628 [2024-05-15 11:12:57.131733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.628 [2024-05-15 11:12:57.131742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.628 [2024-05-15 11:12:57.131749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f30350 00:27:00.628 [2024-05-15 11:12:57.131767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.628 qpair failed and we were unable to recover it. 
00:27:00.628 [2024-05-15 11:12:57.141613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.628 [2024-05-15 11:12:57.141690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.628 [2024-05-15 11:12:57.141706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.628 [2024-05-15 11:12:57.141714] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.628 [2024-05-15 11:12:57.141721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f30350 00:27:00.628 [2024-05-15 11:12:57.141735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.628 qpair failed and we were unable to recover it. 00:27:00.628 [2024-05-15 11:12:57.142045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f26ec0 (9): Bad file descriptor 00:27:00.628 Initializing NVMe Controllers 00:27:00.628 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:00.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:00.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:00.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:00.629 Initialization complete. Launching workers. 
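The summary above shows the host attaching to the target at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, with workers on four cores. As a hedged sketch, the same subsystem could be reached by hand with the kernel initiator and nvme-cli, using only values printed in this log (run outside the test while the target is still listening):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys                                   # confirm the new controller appeared
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1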
00:27:00.629 Starting thread on core 1 00:27:00.629 Starting thread on core 2 00:27:00.629 Starting thread on core 3 00:27:00.629 Starting thread on core 0 00:27:00.629 11:12:57 -- host/target_disconnect.sh@59 -- # sync 00:27:00.629 00:27:00.629 real 0m11.410s 00:27:00.629 user 0m21.239s 00:27:00.629 sys 0m3.470s 00:27:00.629 11:12:57 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:00.629 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:27:00.629 ************************************ 00:27:00.629 END TEST nvmf_target_disconnect_tc2 00:27:00.629 ************************************ 00:27:00.629 11:12:57 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:27:00.629 11:12:57 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:00.629 11:12:57 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:27:00.629 11:12:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:00.629 11:12:57 -- nvmf/common.sh@117 -- # sync 00:27:00.629 11:12:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.629 11:12:57 -- nvmf/common.sh@120 -- # set +e 00:27:00.629 11:12:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.629 11:12:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.629 rmmod nvme_tcp 00:27:00.629 rmmod nvme_fabrics 00:27:00.629 rmmod nvme_keyring 00:27:00.629 11:12:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.629 11:12:57 -- nvmf/common.sh@124 -- # set -e 00:27:00.629 11:12:57 -- nvmf/common.sh@125 -- # return 0 00:27:00.629 11:12:57 -- nvmf/common.sh@478 -- # '[' -n 509904 ']' 00:27:00.629 11:12:57 -- nvmf/common.sh@479 -- # killprocess 509904 00:27:00.629 11:12:57 -- common/autotest_common.sh@946 -- # '[' -z 509904 ']' 00:27:00.629 11:12:57 -- common/autotest_common.sh@950 -- # kill -0 509904 00:27:00.629 11:12:57 -- common/autotest_common.sh@951 -- # uname 00:27:00.629 11:12:57 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:00.889 11:12:57 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 509904 00:27:00.889 11:12:57 -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:27:00.889 11:12:57 -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:27:00.889 11:12:57 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 509904' 00:27:00.889 killing process with pid 509904 00:27:00.889 11:12:57 -- common/autotest_common.sh@965 -- # kill 509904 00:27:00.889 [2024-05-15 11:12:57.321538] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:00.889 11:12:57 -- common/autotest_common.sh@970 -- # wait 509904 00:27:00.889 11:12:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:00.889 11:12:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:00.889 11:12:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:00.889 11:12:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.889 11:12:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.889 11:12:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.889 11:12:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.889 11:12:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.432 11:12:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:03.432 00:27:03.432 real 0m21.128s 00:27:03.432 user 0m49.064s 00:27:03.432 sys 0m9.042s 00:27:03.432 11:12:59 -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:27:03.432 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:27:03.432 ************************************ 00:27:03.432 END TEST nvmf_target_disconnect 00:27:03.432 ************************************ 00:27:03.432 11:12:59 -- nvmf/nvmf.sh@124 -- # timing_exit host 00:27:03.432 11:12:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:03.432 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:27:03.432 11:12:59 -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:27:03.432 00:27:03.433 real 20m15.387s 00:27:03.433 user 42m48.746s 00:27:03.433 sys 6m31.479s 00:27:03.433 11:12:59 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:03.433 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:27:03.433 ************************************ 00:27:03.433 END TEST nvmf_tcp 00:27:03.433 ************************************ 00:27:03.433 11:12:59 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:27:03.433 11:12:59 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:03.433 11:12:59 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:03.433 11:12:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:03.433 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:27:03.433 ************************************ 00:27:03.433 START TEST spdkcli_nvmf_tcp 00:27:03.433 ************************************ 00:27:03.433 11:12:59 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:03.433 * Looking for test storage... 00:27:03.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:03.433 11:12:59 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:03.433 11:12:59 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:03.433 11:12:59 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:03.433 11:12:59 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.433 11:12:59 -- nvmf/common.sh@7 -- # uname -s 00:27:03.433 11:12:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.433 11:12:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.433 11:12:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.433 11:12:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.433 11:12:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.433 11:12:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.433 11:12:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.433 11:12:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.433 11:12:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.433 11:12:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.433 11:12:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.433 11:12:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.433 11:12:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.433 11:12:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.433 11:12:59 -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.433 11:12:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.433 11:12:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.433 11:12:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.433 11:12:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.433 11:12:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.433 11:12:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.433 11:12:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.433 11:12:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.433 11:12:59 -- paths/export.sh@5 -- # export PATH 00:27:03.433 11:12:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.433 11:12:59 -- nvmf/common.sh@47 -- # : 0 00:27:03.433 11:12:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.433 11:12:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.433 11:12:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.433 11:12:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.433 11:12:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.433 11:12:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.433 11:12:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.433 11:12:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.433 11:12:59 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:03.433 11:12:59 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:03.433 11:12:59 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:03.433 11:12:59 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:03.433 11:12:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:03.433 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:27:03.433 11:12:59 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:03.433 11:12:59 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=511727 00:27:03.433 
11:12:59 -- spdkcli/common.sh@34 -- # waitforlisten 511727 00:27:03.433 11:12:59 -- common/autotest_common.sh@827 -- # '[' -z 511727 ']' 00:27:03.433 11:12:59 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:03.433 11:12:59 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.433 11:12:59 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:03.433 11:12:59 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.433 11:12:59 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:03.433 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:27:03.433 [2024-05-15 11:12:59.857453] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:27:03.433 [2024-05-15 11:12:59.857522] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511727 ] 00:27:03.433 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.433 [2024-05-15 11:12:59.921002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:03.433 [2024-05-15 11:12:59.995258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.433 [2024-05-15 11:12:59.995259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.003 11:13:00 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:04.003 11:13:00 -- common/autotest_common.sh@860 -- # return 0 00:27:04.003 11:13:00 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:04.003 11:13:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:04.003 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:27:04.262 11:13:00 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:04.262 11:13:00 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:04.262 11:13:00 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:04.262 11:13:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:04.262 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:27:04.262 11:13:00 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:04.262 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:04.262 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:04.262 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:04.262 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:04.262 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:04.262 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:04.262 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:27:04.262 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.262 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:04.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:04.262 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:04.262 ' 00:27:06.800 [2024-05-15 11:13:03.287836] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.178 [2024-05-15 11:13:04.583749] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:08.178 [2024-05-15 11:13:04.584106] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:10.716 [2024-05-15 11:13:06.991234] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:12.623 [2024-05-15 11:13:09.069585] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:14.002 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:14.002 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:14.003 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:14.003 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:14.003 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:14.003 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:14.003 Executing command: 
['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:14.003 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.003 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.003 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:14.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:14.003 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:14.262 11:13:10 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:14.262 11:13:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.262 11:13:10 -- common/autotest_common.sh@10 -- # set +x 00:27:14.262 11:13:10 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:14.262 11:13:10 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:14.262 11:13:10 -- common/autotest_common.sh@10 -- # set +x 00:27:14.262 11:13:10 -- 
spdkcli/nvmf.sh@69 -- # check_match 00:27:14.262 11:13:10 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:14.522 11:13:11 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:14.784 11:13:11 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:14.784 11:13:11 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:14.784 11:13:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.784 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:27:14.784 11:13:11 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:14.784 11:13:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:14.784 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:27:14.784 11:13:11 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:14.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:14.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:14.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:14.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:14.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:14.784 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:14.784 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:14.784 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:14.784 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:14.784 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:14.784 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:14.784 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:14.784 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:14.784 ' 00:27:20.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:20.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:20.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:20.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:20.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:20.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:20.069 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:20.069 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:20.069 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 
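For reference, the spdkcli job being replayed here boils down to a short command sequence. A minimal sketch of the create side, written as one-shot scripts/spdkcli.py invocations — this assumes spdkcli.py executes a command passed on its command line the same way the `ll /nvmf` call above does; the bdev names, NQN, port and transport options are the ones the test itself uses:

  # build one malloc-backed NVMe-oF/TCP subsystem with spdkcli (sketch, not the harness invocation)
  ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
  ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

The delete commands running below are the mirror image of this: namespaces, hosts and listeners first, then the subsystems, then the malloc bdevs.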
00:27:20.069 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:20.069 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:20.069 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:20.069 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:20.069 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:20.069 11:13:16 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:20.069 11:13:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:20.069 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 11:13:16 -- spdkcli/nvmf.sh@90 -- # killprocess 511727 00:27:20.069 11:13:16 -- common/autotest_common.sh@946 -- # '[' -z 511727 ']' 00:27:20.069 11:13:16 -- common/autotest_common.sh@950 -- # kill -0 511727 00:27:20.069 11:13:16 -- common/autotest_common.sh@951 -- # uname 00:27:20.069 11:13:16 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:20.069 11:13:16 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 511727 00:27:20.069 11:13:16 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:20.069 11:13:16 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:20.069 11:13:16 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 511727' 00:27:20.069 killing process with pid 511727 00:27:20.069 11:13:16 -- common/autotest_common.sh@965 -- # kill 511727 00:27:20.069 [2024-05-15 11:13:16.201147] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:20.069 11:13:16 -- common/autotest_common.sh@970 -- # wait 511727 00:27:20.069 11:13:16 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:20.069 11:13:16 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:20.069 11:13:16 -- spdkcli/common.sh@13 -- # '[' -n 511727 ']' 00:27:20.069 11:13:16 -- spdkcli/common.sh@14 -- # killprocess 511727 00:27:20.069 11:13:16 -- common/autotest_common.sh@946 -- # '[' -z 511727 ']' 00:27:20.069 11:13:16 -- common/autotest_common.sh@950 -- # kill -0 511727 00:27:20.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (511727) - No such process 00:27:20.069 11:13:16 -- common/autotest_common.sh@973 -- # echo 'Process with pid 511727 is not found' 00:27:20.069 Process with pid 511727 is not found 00:27:20.069 11:13:16 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:20.069 11:13:16 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:20.069 11:13:16 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:20.069 00:27:20.069 real 0m16.665s 00:27:20.069 user 0m35.728s 00:27:20.069 sys 0m0.878s 00:27:20.069 11:13:16 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:20.069 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 ************************************ 00:27:20.069 END TEST spdkcli_nvmf_tcp 00:27:20.069 ************************************ 00:27:20.069 11:13:16 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:20.069 11:13:16 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:20.069 11:13:16 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:27:20.069 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 ************************************ 00:27:20.069 START TEST nvmf_identify_passthru 00:27:20.069 ************************************ 00:27:20.069 11:13:16 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:20.069 * Looking for test storage... 00:27:20.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:20.069 11:13:16 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.069 11:13:16 -- nvmf/common.sh@7 -- # uname -s 00:27:20.069 11:13:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.069 11:13:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.069 11:13:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.069 11:13:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.069 11:13:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.069 11:13:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.069 11:13:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.069 11:13:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.069 11:13:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.069 11:13:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.069 11:13:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:20.069 11:13:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:20.069 11:13:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.069 11:13:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.069 11:13:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.069 11:13:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.069 11:13:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.069 11:13:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.069 11:13:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.069 11:13:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.069 11:13:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.070 11:13:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.070 11:13:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.070 11:13:16 -- paths/export.sh@5 -- # export PATH 00:27:20.070 11:13:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.070 11:13:16 -- nvmf/common.sh@47 -- # : 0 00:27:20.070 11:13:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.070 11:13:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.070 11:13:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.070 11:13:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.070 11:13:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.070 11:13:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.070 11:13:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.070 11:13:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.070 11:13:16 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.070 11:13:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.070 11:13:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.070 11:13:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.070 11:13:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.070 11:13:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.070 11:13:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.070 11:13:16 -- paths/export.sh@5 -- # export PATH 00:27:20.070 11:13:16 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.070 11:13:16 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:20.070 11:13:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:20.070 11:13:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.070 11:13:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:20.070 11:13:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:20.070 11:13:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:20.070 11:13:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.070 11:13:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:20.070 11:13:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.070 11:13:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:20.070 11:13:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:20.070 11:13:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.070 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:27:28.211 11:13:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:28.211 11:13:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.211 11:13:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.211 11:13:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.211 11:13:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.211 11:13:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.211 11:13:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.211 11:13:23 -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.211 11:13:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.211 11:13:23 -- nvmf/common.sh@296 -- # e810=() 00:27:28.211 11:13:23 -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.211 11:13:23 -- nvmf/common.sh@297 -- # x722=() 00:27:28.211 11:13:23 -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.211 11:13:23 -- nvmf/common.sh@298 -- # mlx=() 00:27:28.211 11:13:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.211 11:13:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.211 11:13:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.211 11:13:23 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.211 11:13:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.211 11:13:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.211 11:13:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:28.211 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:28.211 11:13:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.211 11:13:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:28.211 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:28.211 11:13:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.211 11:13:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.211 11:13:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.212 11:13:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.212 11:13:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.212 11:13:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:28.212 11:13:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.212 11:13:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:28.212 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:28.212 11:13:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.212 11:13:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.212 11:13:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.212 11:13:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:28.212 11:13:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.212 11:13:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:28.212 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:28.212 11:13:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.212 11:13:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:28.212 11:13:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:28.212 11:13:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:28.212 11:13:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:28.212 11:13:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:28.212 11:13:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.212 11:13:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.212 11:13:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.212 11:13:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.212 11:13:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.212 11:13:23 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.212 11:13:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.212 11:13:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.212 11:13:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.212 11:13:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.212 11:13:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.212 11:13:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.212 11:13:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.212 11:13:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.212 11:13:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.212 11:13:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.212 11:13:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.212 11:13:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.212 11:13:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.212 11:13:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:27:28.212 00:27:28.212 --- 10.0.0.2 ping statistics --- 00:27:28.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.212 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:27:28.212 11:13:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:27:28.212 00:27:28.212 --- 10.0.0.1 ping statistics --- 00:27:28.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.212 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:27:28.212 11:13:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.212 11:13:23 -- nvmf/common.sh@411 -- # return 0 00:27:28.212 11:13:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:28.212 11:13:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.212 11:13:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:28.212 11:13:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:28.212 11:13:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.212 11:13:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:28.212 11:13:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:28.212 11:13:23 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:28.212 11:13:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:28.212 11:13:23 -- common/autotest_common.sh@10 -- # set +x 00:27:28.212 11:13:23 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:28.212 11:13:23 -- common/autotest_common.sh@1520 -- # bdfs=() 00:27:28.212 11:13:23 -- common/autotest_common.sh@1520 -- # local bdfs 00:27:28.212 11:13:23 -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:27:28.212 11:13:23 -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:27:28.212 11:13:23 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:28.212 11:13:23 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:28.212 11:13:23 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:27:28.212 11:13:23 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:28.212 11:13:23 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:27:28.212 11:13:23 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:27:28.212 11:13:23 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:27:28.212 11:13:23 -- common/autotest_common.sh@1523 -- # echo 0000:65:00.0 00:27:28.212 11:13:23 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:27:28.212 11:13:23 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:27:28.212 11:13:23 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:28.212 11:13:23 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:27:28.212 11:13:23 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:28.212 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.212 11:13:24 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:27:28.212 11:13:24 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:27:28.212 11:13:24 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:28.212 11:13:24 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:28.212 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.212 11:13:24 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:27:28.212 11:13:24 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:28.212 11:13:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.212 11:13:24 -- common/autotest_common.sh@10 -- # set +x 00:27:28.212 11:13:24 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:28.212 11:13:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:28.212 11:13:24 -- common/autotest_common.sh@10 -- # set +x 00:27:28.212 11:13:24 -- target/identify_passthru.sh@31 -- # nvmfpid=518846 00:27:28.212 11:13:24 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:28.212 11:13:24 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:28.212 11:13:24 -- target/identify_passthru.sh@35 -- # waitforlisten 518846 00:27:28.212 11:13:24 -- common/autotest_common.sh@827 -- # '[' -z 518846 ']' 00:27:28.212 11:13:24 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.212 11:13:24 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:28.212 11:13:24 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.212 11:13:24 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:28.212 11:13:24 -- common/autotest_common.sh@10 -- # set +x 00:27:28.212 [2024-05-15 11:13:24.839359] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
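The BDF and serial-number lookups traced just above follow a simple pattern. A rough standalone equivalent, assuming the repo checkout used by this job ($rootdir) and jq on the PATH; head -n1 is added here only to make the "first controller" choice explicit:

  # find the first NVMe controller's PCI address and read its serial number
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  echo "$bdf $serial"

On this node that yields 0000:65:00.0 and S64GNE0R605487, which the test later compares against what the NVMe-oF passthru subsystem reports over the fabric.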
00:27:28.212 [2024-05-15 11:13:24.839418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.472 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.472 [2024-05-15 11:13:24.905265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.472 [2024-05-15 11:13:24.975175] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.472 [2024-05-15 11:13:24.975210] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.472 [2024-05-15 11:13:24.975218] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.472 [2024-05-15 11:13:24.975224] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.472 [2024-05-15 11:13:24.975230] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.472 [2024-05-15 11:13:24.975363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.472 [2024-05-15 11:13:24.975491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.472 [2024-05-15 11:13:24.975651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.472 [2024-05-15 11:13:24.975652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.040 11:13:25 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.040 11:13:25 -- common/autotest_common.sh@860 -- # return 0 00:27:29.040 11:13:25 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:29.040 11:13:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.040 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:27:29.040 INFO: Log level set to 20 00:27:29.040 INFO: Requests: 00:27:29.040 { 00:27:29.040 "jsonrpc": "2.0", 00:27:29.040 "method": "nvmf_set_config", 00:27:29.040 "id": 1, 00:27:29.040 "params": { 00:27:29.040 "admin_cmd_passthru": { 00:27:29.040 "identify_ctrlr": true 00:27:29.040 } 00:27:29.040 } 00:27:29.040 } 00:27:29.040 00:27:29.040 INFO: response: 00:27:29.040 { 00:27:29.040 "jsonrpc": "2.0", 00:27:29.040 "id": 1, 00:27:29.040 "result": true 00:27:29.040 } 00:27:29.040 00:27:29.040 11:13:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.040 11:13:25 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:29.040 11:13:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.040 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:27:29.040 INFO: Setting log level to 20 00:27:29.040 INFO: Setting log level to 20 00:27:29.040 INFO: Log level set to 20 00:27:29.040 INFO: Log level set to 20 00:27:29.040 INFO: Requests: 00:27:29.040 { 00:27:29.040 "jsonrpc": "2.0", 00:27:29.040 "method": "framework_start_init", 00:27:29.040 "id": 1 00:27:29.040 } 00:27:29.040 00:27:29.040 INFO: Requests: 00:27:29.040 { 00:27:29.040 "jsonrpc": "2.0", 00:27:29.040 "method": "framework_start_init", 00:27:29.040 "id": 1 00:27:29.040 } 00:27:29.040 00:27:29.040 [2024-05-15 11:13:25.682961] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:29.040 INFO: response: 00:27:29.040 { 00:27:29.040 "jsonrpc": "2.0", 00:27:29.040 "id": 1, 00:27:29.040 "result": true 00:27:29.040 } 00:27:29.040 00:27:29.040 INFO: response: 00:27:29.040 { 00:27:29.040 
"jsonrpc": "2.0", 00:27:29.040 "id": 1, 00:27:29.040 "result": true 00:27:29.040 } 00:27:29.040 00:27:29.040 11:13:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.040 11:13:25 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.040 11:13:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.040 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:27:29.299 INFO: Setting log level to 40 00:27:29.299 INFO: Setting log level to 40 00:27:29.299 INFO: Setting log level to 40 00:27:29.299 [2024-05-15 11:13:25.696212] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.299 11:13:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.299 11:13:25 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:29.299 11:13:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.299 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:27:29.299 11:13:25 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:27:29.299 11:13:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.299 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:27:29.557 Nvme0n1 00:27:29.557 11:13:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.557 11:13:26 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:29.557 11:13:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.557 11:13:26 -- common/autotest_common.sh@10 -- # set +x 00:27:29.557 11:13:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.557 11:13:26 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:29.557 11:13:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.557 11:13:26 -- common/autotest_common.sh@10 -- # set +x 00:27:29.557 11:13:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.557 11:13:26 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.557 11:13:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.558 11:13:26 -- common/autotest_common.sh@10 -- # set +x 00:27:29.558 [2024-05-15 11:13:26.076585] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:29.558 [2024-05-15 11:13:26.076841] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.558 11:13:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.558 11:13:26 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:29.558 11:13:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.558 11:13:26 -- common/autotest_common.sh@10 -- # set +x 00:27:29.558 [ 00:27:29.558 { 00:27:29.558 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:29.558 "subtype": "Discovery", 00:27:29.558 "listen_addresses": [], 00:27:29.558 "allow_any_host": true, 00:27:29.558 "hosts": [] 00:27:29.558 }, 00:27:29.558 { 00:27:29.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.558 "subtype": "NVMe", 00:27:29.558 "listen_addresses": [ 00:27:29.558 { 00:27:29.558 "trtype": "TCP", 00:27:29.558 "adrfam": "IPv4", 00:27:29.558 "traddr": "10.0.0.2", 00:27:29.558 "trsvcid": "4420" 00:27:29.558 } 00:27:29.558 ], 00:27:29.558 
"allow_any_host": true, 00:27:29.558 "hosts": [], 00:27:29.558 "serial_number": "SPDK00000000000001", 00:27:29.558 "model_number": "SPDK bdev Controller", 00:27:29.558 "max_namespaces": 1, 00:27:29.558 "min_cntlid": 1, 00:27:29.558 "max_cntlid": 65519, 00:27:29.558 "namespaces": [ 00:27:29.558 { 00:27:29.558 "nsid": 1, 00:27:29.558 "bdev_name": "Nvme0n1", 00:27:29.558 "name": "Nvme0n1", 00:27:29.558 "nguid": "3634473052605487002538450000003C", 00:27:29.558 "uuid": "36344730-5260-5487-0025-38450000003c" 00:27:29.558 } 00:27:29.558 ] 00:27:29.558 } 00:27:29.558 ] 00:27:29.558 11:13:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.558 11:13:26 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:29.558 11:13:26 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:29.558 11:13:26 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:29.558 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.817 11:13:26 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:27:29.817 11:13:26 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:29.817 11:13:26 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:29.817 11:13:26 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:29.817 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.076 11:13:26 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:27:30.076 11:13:26 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:27:30.076 11:13:26 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:27:30.076 11:13:26 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.076 11:13:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.076 11:13:26 -- common/autotest_common.sh@10 -- # set +x 00:27:30.076 11:13:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.076 11:13:26 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:30.076 11:13:26 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:30.076 11:13:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:30.076 11:13:26 -- nvmf/common.sh@117 -- # sync 00:27:30.076 11:13:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.076 11:13:26 -- nvmf/common.sh@120 -- # set +e 00:27:30.076 11:13:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.076 11:13:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.076 rmmod nvme_tcp 00:27:30.076 rmmod nvme_fabrics 00:27:30.076 rmmod nvme_keyring 00:27:30.076 11:13:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.076 11:13:26 -- nvmf/common.sh@124 -- # set -e 00:27:30.076 11:13:26 -- nvmf/common.sh@125 -- # return 0 00:27:30.076 11:13:26 -- nvmf/common.sh@478 -- # '[' -n 518846 ']' 00:27:30.076 11:13:26 -- nvmf/common.sh@479 -- # killprocess 518846 00:27:30.076 11:13:26 -- common/autotest_common.sh@946 -- # '[' -z 518846 ']' 00:27:30.076 11:13:26 -- common/autotest_common.sh@950 -- # kill -0 518846 00:27:30.076 11:13:26 -- common/autotest_common.sh@951 -- # uname 00:27:30.076 11:13:26 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:30.076 11:13:26 -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 518846 00:27:30.077 11:13:26 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:30.077 11:13:26 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:30.077 11:13:26 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 518846' 00:27:30.077 killing process with pid 518846 00:27:30.077 11:13:26 -- common/autotest_common.sh@965 -- # kill 518846 00:27:30.077 [2024-05-15 11:13:26.621029] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:30.077 11:13:26 -- common/autotest_common.sh@970 -- # wait 518846 00:27:30.337 11:13:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:30.337 11:13:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:30.337 11:13:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:30.337 11:13:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.337 11:13:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.337 11:13:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.337 11:13:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:30.337 11:13:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.878 11:13:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.878 00:27:32.878 real 0m12.541s 00:27:32.878 user 0m9.942s 00:27:32.878 sys 0m5.979s 00:27:32.878 11:13:28 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:32.878 11:13:28 -- common/autotest_common.sh@10 -- # set +x 00:27:32.878 ************************************ 00:27:32.878 END TEST nvmf_identify_passthru 00:27:32.878 ************************************ 00:27:32.878 11:13:28 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:32.878 11:13:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:32.878 11:13:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:32.878 11:13:28 -- common/autotest_common.sh@10 -- # set +x 00:27:32.878 ************************************ 00:27:32.878 START TEST nvmf_dif 00:27:32.878 ************************************ 00:27:32.878 11:13:29 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:32.878 * Looking for test storage... 
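Stripped of the harness wrappers, the identify_passthru run that just finished configures the target with a handful of RPCs. A condensed sketch using scripts/rpc.py directly — the test issues the same calls through its rpc_cmd helper and runs nvmf_tgt inside the cvl_0_0_ns_spdk namespace, which is omitted here; the addresses, NQN and flags are the ones visible in the log:

  # start the target in RPC-wait mode so config can be injected before init
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must land before framework_start_init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # expose the physical controller found earlier as a passthru namespace
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # identify over the fabric should now report the drive's own serial/model
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The '!=' checks above (serial S64GNE0R605487, model SAMSUNG) pass because the passthru-identify-ctrlr setting makes the subsystem answer Identify with the underlying controller's data rather than the subsystem's own SPDK serial and model strings.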
00:27:32.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:32.878 11:13:29 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.878 11:13:29 -- nvmf/common.sh@7 -- # uname -s 00:27:32.878 11:13:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.878 11:13:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.878 11:13:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.878 11:13:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.878 11:13:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.878 11:13:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.878 11:13:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.878 11:13:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.878 11:13:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.878 11:13:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.878 11:13:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.878 11:13:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.878 11:13:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.878 11:13:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.878 11:13:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.878 11:13:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.878 11:13:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.878 11:13:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.878 11:13:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.878 11:13:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.878 11:13:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.878 11:13:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.878 11:13:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.878 11:13:29 -- paths/export.sh@5 -- # export PATH 00:27:32.878 11:13:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.878 11:13:29 -- nvmf/common.sh@47 -- # : 0 00:27:32.878 11:13:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.878 11:13:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.878 11:13:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.878 11:13:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.878 11:13:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.878 11:13:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.878 11:13:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.878 11:13:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.878 11:13:29 -- target/dif.sh@15 -- # NULL_META=16 00:27:32.878 11:13:29 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:32.878 11:13:29 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:32.878 11:13:29 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:32.878 11:13:29 -- target/dif.sh@135 -- # nvmftestinit 00:27:32.878 11:13:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:32.878 11:13:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.878 11:13:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:32.878 11:13:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:32.878 11:13:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:32.878 11:13:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.878 11:13:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:32.878 11:13:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.878 11:13:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:32.878 11:13:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:32.878 11:13:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.878 11:13:29 -- common/autotest_common.sh@10 -- # set +x 00:27:39.453 11:13:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:39.453 11:13:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:39.453 11:13:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:39.453 11:13:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:39.453 11:13:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:39.453 11:13:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:39.453 11:13:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:39.453 11:13:35 -- nvmf/common.sh@295 -- # net_devs=() 00:27:39.453 11:13:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:39.453 11:13:35 -- nvmf/common.sh@296 -- # e810=() 00:27:39.453 11:13:35 -- nvmf/common.sh@296 -- # local -ga e810 00:27:39.453 11:13:35 -- nvmf/common.sh@297 -- # x722=() 00:27:39.453 11:13:35 -- nvmf/common.sh@297 -- # local -ga x722 00:27:39.453 11:13:35 -- nvmf/common.sh@298 -- # mlx=() 00:27:39.453 11:13:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:39.453 11:13:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:39.453 11:13:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.453 11:13:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:39.453 11:13:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:39.453 11:13:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:39.453 11:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.453 11:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:39.453 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:39.453 11:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.453 11:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:39.453 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:39.453 11:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:39.453 11:13:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.453 11:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.453 11:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:39.453 11:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.453 11:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:39.453 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:39.453 11:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.453 11:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.453 11:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.453 11:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:39.453 11:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.453 11:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:39.453 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:39.453 11:13:35 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:39.453 11:13:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:39.453 11:13:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:39.453 11:13:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:39.453 11:13:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:39.453 11:13:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.453 11:13:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.453 11:13:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.453 11:13:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:39.453 11:13:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.453 11:13:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.453 11:13:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:39.453 11:13:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.453 11:13:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.453 11:13:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:39.453 11:13:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:39.453 11:13:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.453 11:13:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.453 11:13:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.453 11:13:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.453 11:13:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:39.453 11:13:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.453 11:13:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.453 11:13:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.453 11:13:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:39.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:27:39.453 00:27:39.453 --- 10.0.0.2 ping statistics --- 00:27:39.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.453 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:27:39.453 11:13:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:39.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:27:39.453 00:27:39.453 --- 10.0.0.1 ping statistics --- 00:27:39.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.453 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:27:39.453 11:13:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.453 11:13:35 -- nvmf/common.sh@411 -- # return 0 00:27:39.453 11:13:35 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:39.453 11:13:35 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:42.751 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:42.751 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:42.751 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:43.012 11:13:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.012 11:13:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:43.012 11:13:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:43.012 11:13:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.012 11:13:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:43.012 11:13:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:43.012 11:13:39 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:43.012 11:13:39 -- target/dif.sh@137 -- # nvmfappstart 00:27:43.012 11:13:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:43.012 11:13:39 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:43.012 11:13:39 -- common/autotest_common.sh@10 -- # set +x 00:27:43.012 11:13:39 -- nvmf/common.sh@470 -- # nvmfpid=524882 00:27:43.012 11:13:39 -- nvmf/common.sh@471 -- # waitforlisten 524882 00:27:43.012 11:13:39 -- common/autotest_common.sh@827 -- # '[' -z 524882 ']' 00:27:43.012 11:13:39 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.012 11:13:39 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:43.012 11:13:39 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
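[editorial note] The plumbing traced above is what lets target and initiator share one machine: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and TCP port 4420 is opened between them. A condensed sketch of that topology, with interface names taken from this log (any other NIC pair would be wired the same way):
# loopback topology built by nvmftestinit (interface names from this log; run as root)
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port of the pair into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the listener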
00:27:43.012 11:13:39 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:43.012 11:13:39 -- common/autotest_common.sh@10 -- # set +x 00:27:43.012 11:13:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:43.012 [2024-05-15 11:13:39.595460] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:27:43.012 [2024-05-15 11:13:39.595521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.012 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.272 [2024-05-15 11:13:39.665091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.272 [2024-05-15 11:13:39.739012] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.272 [2024-05-15 11:13:39.739049] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.272 [2024-05-15 11:13:39.739057] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.272 [2024-05-15 11:13:39.739063] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.272 [2024-05-15 11:13:39.739068] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.272 [2024-05-15 11:13:39.739091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.843 11:13:40 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:43.843 11:13:40 -- common/autotest_common.sh@860 -- # return 0 00:27:43.843 11:13:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:43.843 11:13:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.843 11:13:40 -- common/autotest_common.sh@10 -- # set +x 00:27:43.843 11:13:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.843 11:13:40 -- target/dif.sh@139 -- # create_transport 00:27:43.843 11:13:40 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:43.843 11:13:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.843 11:13:40 -- common/autotest_common.sh@10 -- # set +x 00:27:43.843 [2024-05-15 11:13:40.402396] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.843 11:13:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.843 11:13:40 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:43.843 11:13:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:43.843 11:13:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:43.843 11:13:40 -- common/autotest_common.sh@10 -- # set +x 00:27:43.843 ************************************ 00:27:43.843 START TEST fio_dif_1_default 00:27:43.843 ************************************ 00:27:43.843 11:13:40 -- common/autotest_common.sh@1121 -- # fio_dif_1 00:27:43.843 11:13:40 -- target/dif.sh@86 -- # create_subsystems 0 00:27:43.843 11:13:40 -- target/dif.sh@28 -- # local sub 00:27:43.843 11:13:40 -- target/dif.sh@30 -- # for sub in "$@" 00:27:43.843 11:13:40 -- target/dif.sh@31 -- # create_subsystem 0 00:27:43.843 11:13:40 -- target/dif.sh@18 -- # local sub_id=0 00:27:43.843 11:13:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
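[editorial note] With nvmf_tgt running inside the namespace and the TCP transport created with --dif-insert-or-strip, create_subsystem builds one target per test: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and the requested DIF type, exposed under its own NQN on the 10.0.0.2:4420 listener. A condensed sketch of the RPC sequence being traced here, written against scripts/rpc.py directly (an assumption: the harness reaches the same calls through its rpc_cmd wrapper):
# per-subsystem setup for fio_dif_1_default (arguments copied from the trace; rpc.py path assumed)
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420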
00:27:43.843 11:13:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.843 11:13:40 -- common/autotest_common.sh@10 -- # set +x 00:27:43.843 bdev_null0 00:27:43.843 11:13:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.843 11:13:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:43.843 11:13:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.843 11:13:40 -- common/autotest_common.sh@10 -- # set +x 00:27:43.843 11:13:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.843 11:13:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:43.843 11:13:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.843 11:13:40 -- common/autotest_common.sh@10 -- # set +x 00:27:43.843 11:13:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.843 11:13:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:43.843 11:13:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.843 11:13:40 -- common/autotest_common.sh@10 -- # set +x 00:27:43.843 [2024-05-15 11:13:40.482554] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:43.843 [2024-05-15 11:13:40.482753] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.843 11:13:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.843 11:13:40 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:43.843 11:13:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.843 11:13:40 -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.843 11:13:40 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:43.843 11:13:40 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:43.843 11:13:40 -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:43.843 11:13:40 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:43.843 11:13:40 -- common/autotest_common.sh@1337 -- # shift 00:27:43.843 11:13:40 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:43.843 11:13:40 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:43.843 11:13:40 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:43.843 11:13:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:43.843 11:13:40 -- target/dif.sh@82 -- # gen_fio_conf 00:27:43.843 11:13:40 -- nvmf/common.sh@521 -- # config=() 00:27:43.843 11:13:40 -- target/dif.sh@54 -- # local file 00:27:43.843 11:13:40 -- nvmf/common.sh@521 -- # local subsystem config 00:27:43.843 11:13:40 -- target/dif.sh@56 -- # cat 00:27:43.843 11:13:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:43.843 11:13:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:43.843 { 00:27:43.843 "params": { 00:27:43.843 "name": "Nvme$subsystem", 00:27:43.843 "trtype": "$TEST_TRANSPORT", 00:27:43.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.843 "adrfam": "ipv4", 00:27:43.843 "trsvcid": "$NVMF_PORT", 00:27:43.843 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.843 "hdgst": ${hdgst:-false}, 00:27:43.843 "ddgst": ${ddgst:-false} 00:27:43.843 }, 00:27:43.843 "method": "bdev_nvme_attach_controller" 00:27:43.843 } 00:27:43.843 EOF 00:27:43.843 )") 00:27:43.843 11:13:40 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:43.843 11:13:40 -- common/autotest_common.sh@1341 -- # grep libasan 00:27:43.843 11:13:40 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:43.843 11:13:40 -- nvmf/common.sh@543 -- # cat 00:27:43.843 11:13:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:43.843 11:13:40 -- target/dif.sh@72 -- # (( file <= files )) 00:27:44.104 11:13:40 -- nvmf/common.sh@545 -- # jq . 00:27:44.104 11:13:40 -- nvmf/common.sh@546 -- # IFS=, 00:27:44.104 11:13:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:44.104 "params": { 00:27:44.104 "name": "Nvme0", 00:27:44.104 "trtype": "tcp", 00:27:44.104 "traddr": "10.0.0.2", 00:27:44.104 "adrfam": "ipv4", 00:27:44.104 "trsvcid": "4420", 00:27:44.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.104 "hdgst": false, 00:27:44.104 "ddgst": false 00:27:44.104 }, 00:27:44.104 "method": "bdev_nvme_attach_controller" 00:27:44.104 }' 00:27:44.104 11:13:40 -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:44.104 11:13:40 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:44.104 11:13:40 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:44.104 11:13:40 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:44.104 11:13:40 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:44.104 11:13:40 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:44.104 11:13:40 -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:44.104 11:13:40 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:44.104 11:13:40 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:44.104 11:13:40 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.366 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:44.366 fio-3.35 00:27:44.366 Starting 1 thread 00:27:44.366 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.580 00:27:56.580 filename0: (groupid=0, jobs=1): err= 0: pid=525382: Wed May 15 11:13:51 2024 00:27:56.580 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10025msec) 00:27:56.580 slat (nsec): min=5622, max=34016, avg=6419.47, stdev=1710.40 00:27:56.580 clat (usec): min=40832, max=42992, avg=41068.47, stdev=351.76 00:27:56.580 lat (usec): min=40838, max=42998, avg=41074.88, stdev=352.07 00:27:56.580 clat percentiles (usec): 00:27:56.580 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:56.580 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:56.580 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:27:56.580 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:27:56.580 | 99.99th=[43254] 00:27:56.580 bw ( KiB/s): min= 384, max= 416, per=99.63%, avg=388.80, stdev=11.72, samples=20 00:27:56.580 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:56.580 
lat (msec) : 50=100.00% 00:27:56.580 cpu : usr=95.45%, sys=4.37%, ctx=13, majf=0, minf=223 00:27:56.580 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.580 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.580 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:56.580 00:27:56.580 Run status group 0 (all jobs): 00:27:56.580 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10025-10025msec 00:27:56.580 11:13:51 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:56.580 11:13:51 -- target/dif.sh@43 -- # local sub 00:27:56.580 11:13:51 -- target/dif.sh@45 -- # for sub in "$@" 00:27:56.580 11:13:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:56.580 11:13:51 -- target/dif.sh@36 -- # local sub_id=0 00:27:56.580 11:13:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 11:13:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 00:27:56.580 real 0m11.099s 00:27:56.580 user 0m25.561s 00:27:56.580 sys 0m0.738s 00:27:56.580 11:13:51 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 ************************************ 00:27:56.580 END TEST fio_dif_1_default 00:27:56.580 ************************************ 00:27:56.580 11:13:51 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:56.580 11:13:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:56.580 11:13:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 ************************************ 00:27:56.580 START TEST fio_dif_1_multi_subsystems 00:27:56.580 ************************************ 00:27:56.580 11:13:51 -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:27:56.580 11:13:51 -- target/dif.sh@92 -- # local files=1 00:27:56.580 11:13:51 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:56.580 11:13:51 -- target/dif.sh@28 -- # local sub 00:27:56.580 11:13:51 -- target/dif.sh@30 -- # for sub in "$@" 00:27:56.580 11:13:51 -- target/dif.sh@31 -- # create_subsystem 0 00:27:56.580 11:13:51 -- target/dif.sh@18 -- # local sub_id=0 00:27:56.580 11:13:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 bdev_null0 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 11:13:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 11:13:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 11:13:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 [2024-05-15 11:13:51.675503] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 11:13:51 -- target/dif.sh@30 -- # for sub in "$@" 00:27:56.580 11:13:51 -- target/dif.sh@31 -- # create_subsystem 1 00:27:56.580 11:13:51 -- target/dif.sh@18 -- # local sub_id=1 00:27:56.580 11:13:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 bdev_null1 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 11:13:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 11:13:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.580 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.580 11:13:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.580 11:13:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.580 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:27:56.581 11:13:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.581 11:13:51 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:56.581 11:13:51 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:56.581 11:13:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:56.581 11:13:51 -- nvmf/common.sh@521 -- # config=() 00:27:56.581 11:13:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.581 11:13:51 -- nvmf/common.sh@521 -- # local subsystem config 00:27:56.581 11:13:51 -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.581 11:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:56.581 11:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:56.581 { 00:27:56.581 "params": { 00:27:56.581 "name": "Nvme$subsystem", 00:27:56.581 "trtype": "$TEST_TRANSPORT", 00:27:56.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.581 "adrfam": "ipv4", 00:27:56.581 "trsvcid": 
"$NVMF_PORT", 00:27:56.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.581 "hdgst": ${hdgst:-false}, 00:27:56.581 "ddgst": ${ddgst:-false} 00:27:56.581 }, 00:27:56.581 "method": "bdev_nvme_attach_controller" 00:27:56.581 } 00:27:56.581 EOF 00:27:56.581 )") 00:27:56.581 11:13:51 -- target/dif.sh@82 -- # gen_fio_conf 00:27:56.581 11:13:51 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:56.581 11:13:51 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:56.581 11:13:51 -- target/dif.sh@54 -- # local file 00:27:56.581 11:13:51 -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:56.581 11:13:51 -- target/dif.sh@56 -- # cat 00:27:56.581 11:13:51 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.581 11:13:51 -- common/autotest_common.sh@1337 -- # shift 00:27:56.581 11:13:51 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:56.581 11:13:51 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:56.581 11:13:51 -- nvmf/common.sh@543 -- # cat 00:27:56.581 11:13:51 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.581 11:13:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:56.581 11:13:51 -- common/autotest_common.sh@1341 -- # grep libasan 00:27:56.581 11:13:51 -- target/dif.sh@72 -- # (( file <= files )) 00:27:56.581 11:13:51 -- target/dif.sh@73 -- # cat 00:27:56.581 11:13:51 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:56.581 11:13:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:56.581 11:13:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:56.581 { 00:27:56.581 "params": { 00:27:56.581 "name": "Nvme$subsystem", 00:27:56.581 "trtype": "$TEST_TRANSPORT", 00:27:56.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.581 "adrfam": "ipv4", 00:27:56.581 "trsvcid": "$NVMF_PORT", 00:27:56.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.581 "hdgst": ${hdgst:-false}, 00:27:56.581 "ddgst": ${ddgst:-false} 00:27:56.581 }, 00:27:56.581 "method": "bdev_nvme_attach_controller" 00:27:56.581 } 00:27:56.581 EOF 00:27:56.581 )") 00:27:56.581 11:13:51 -- target/dif.sh@72 -- # (( file++ )) 00:27:56.581 11:13:51 -- target/dif.sh@72 -- # (( file <= files )) 00:27:56.581 11:13:51 -- nvmf/common.sh@543 -- # cat 00:27:56.581 11:13:51 -- nvmf/common.sh@545 -- # jq . 
00:27:56.581 11:13:51 -- nvmf/common.sh@546 -- # IFS=, 00:27:56.581 11:13:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:56.581 "params": { 00:27:56.581 "name": "Nvme0", 00:27:56.581 "trtype": "tcp", 00:27:56.581 "traddr": "10.0.0.2", 00:27:56.581 "adrfam": "ipv4", 00:27:56.581 "trsvcid": "4420", 00:27:56.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:56.581 "hdgst": false, 00:27:56.581 "ddgst": false 00:27:56.581 }, 00:27:56.581 "method": "bdev_nvme_attach_controller" 00:27:56.581 },{ 00:27:56.581 "params": { 00:27:56.581 "name": "Nvme1", 00:27:56.581 "trtype": "tcp", 00:27:56.581 "traddr": "10.0.0.2", 00:27:56.581 "adrfam": "ipv4", 00:27:56.581 "trsvcid": "4420", 00:27:56.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:56.581 "hdgst": false, 00:27:56.581 "ddgst": false 00:27:56.581 }, 00:27:56.581 "method": "bdev_nvme_attach_controller" 00:27:56.581 }' 00:27:56.581 11:13:51 -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:56.581 11:13:51 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:56.581 11:13:51 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:56.581 11:13:51 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.581 11:13:51 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:56.581 11:13:51 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:56.581 11:13:51 -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:56.581 11:13:51 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:56.581 11:13:51 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:56.581 11:13:51 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.581 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:56.581 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:56.581 fio-3.35 00:27:56.581 Starting 2 threads 00:27:56.581 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.572 00:28:06.572 filename0: (groupid=0, jobs=1): err= 0: pid=527690: Wed May 15 11:14:02 2024 00:28:06.572 read: IOPS=98, BW=396KiB/s (405kB/s)(3968KiB/10029msec) 00:28:06.572 slat (nsec): min=5604, max=32321, avg=6567.49, stdev=1567.93 00:28:06.572 clat (usec): min=906, max=42421, avg=40418.91, stdev=5064.86 00:28:06.572 lat (usec): min=912, max=42454, avg=40425.47, stdev=5064.95 00:28:06.572 clat percentiles (usec): 00:28:06.573 | 1.00th=[ 955], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:28:06.573 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:06.573 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:28:06.573 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:28:06.573 | 99.99th=[42206] 00:28:06.573 bw ( KiB/s): min= 384, max= 448, per=50.32%, avg=395.20, stdev=18.79, samples=20 00:28:06.573 iops : min= 96, max= 112, avg=98.80, stdev= 4.70, samples=20 00:28:06.573 lat (usec) : 1000=1.61% 00:28:06.573 lat (msec) : 50=98.39% 00:28:06.573 cpu : usr=97.20%, sys=2.60%, ctx=13, majf=0, minf=158 00:28:06.573 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.573 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.573 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.573 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:06.573 filename1: (groupid=0, jobs=1): err= 0: pid=527691: Wed May 15 11:14:02 2024 00:28:06.573 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10019msec) 00:28:06.573 slat (nsec): min=5607, max=32399, avg=6615.44, stdev=1491.49 00:28:06.573 clat (usec): min=40862, max=42995, avg=41040.86, stdev=293.78 00:28:06.573 lat (usec): min=40868, max=43001, avg=41047.47, stdev=293.90 00:28:06.573 clat percentiles (usec): 00:28:06.573 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:06.573 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:06.573 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:06.573 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:28:06.573 | 99.99th=[43254] 00:28:06.573 bw ( KiB/s): min= 384, max= 416, per=49.43%, avg=388.80, stdev=11.72, samples=20 00:28:06.573 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:28:06.573 lat (msec) : 50=100.00% 00:28:06.573 cpu : usr=97.09%, sys=2.72%, ctx=13, majf=0, minf=71 00:28:06.573 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.573 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.573 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:06.573 00:28:06.573 Run status group 0 (all jobs): 00:28:06.573 READ: bw=785KiB/s (804kB/s), 390KiB/s-396KiB/s (399kB/s-405kB/s), io=7872KiB (8061kB), run=10019-10029msec 00:28:06.573 11:14:02 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:06.573 11:14:02 -- target/dif.sh@43 -- # local sub 00:28:06.573 11:14:02 -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.573 11:14:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:06.573 11:14:02 -- target/dif.sh@36 -- # local sub_id=0 00:28:06.573 11:14:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.573 11:14:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.573 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 11:14:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.573 11:14:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:06.573 11:14:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.573 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 11:14:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.573 11:14:02 -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.573 11:14:02 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:06.573 11:14:02 -- target/dif.sh@36 -- # local sub_id=1 00:28:06.573 11:14:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.573 11:14:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.573 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 11:14:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.573 11:14:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:06.573 11:14:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.573 11:14:02 -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.573 11:14:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.573 00:28:06.573 real 0m11.253s 00:28:06.573 user 0m32.410s 00:28:06.573 sys 0m0.831s 00:28:06.573 11:14:02 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:06.573 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 ************************************ 00:28:06.573 END TEST fio_dif_1_multi_subsystems 00:28:06.573 ************************************ 00:28:06.573 11:14:02 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:06.573 11:14:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:06.573 11:14:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:06.573 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 ************************************ 00:28:06.573 START TEST fio_dif_rand_params 00:28:06.573 ************************************ 00:28:06.573 11:14:02 -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:28:06.573 11:14:02 -- target/dif.sh@100 -- # local NULL_DIF 00:28:06.573 11:14:02 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:06.573 11:14:02 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:06.573 11:14:02 -- target/dif.sh@103 -- # bs=128k 00:28:06.573 11:14:02 -- target/dif.sh@103 -- # numjobs=3 00:28:06.573 11:14:02 -- target/dif.sh@103 -- # iodepth=3 00:28:06.573 11:14:02 -- target/dif.sh@103 -- # runtime=5 00:28:06.573 11:14:02 -- target/dif.sh@105 -- # create_subsystems 0 00:28:06.573 11:14:02 -- target/dif.sh@28 -- # local sub 00:28:06.573 11:14:02 -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.573 11:14:02 -- target/dif.sh@31 -- # create_subsystem 0 00:28:06.573 11:14:02 -- target/dif.sh@18 -- # local sub_id=0 00:28:06.573 11:14:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:06.573 11:14:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.573 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 bdev_null0 00:28:06.573 11:14:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.573 11:14:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:06.573 11:14:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.573 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 11:14:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.573 11:14:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:06.573 11:14:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.573 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 11:14:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.573 11:14:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:06.573 11:14:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.573 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:28:06.573 [2024-05-15 11:14:03.012992] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.573 11:14:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.573 11:14:03 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:06.573 11:14:03 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:06.573 11:14:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:06.574 11:14:03 
-- nvmf/common.sh@521 -- # config=() 00:28:06.574 11:14:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.574 11:14:03 -- nvmf/common.sh@521 -- # local subsystem config 00:28:06.574 11:14:03 -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.574 11:14:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:06.574 11:14:03 -- target/dif.sh@82 -- # gen_fio_conf 00:28:06.574 11:14:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:06.574 { 00:28:06.574 "params": { 00:28:06.574 "name": "Nvme$subsystem", 00:28:06.574 "trtype": "$TEST_TRANSPORT", 00:28:06.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.574 "adrfam": "ipv4", 00:28:06.574 "trsvcid": "$NVMF_PORT", 00:28:06.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.574 "hdgst": ${hdgst:-false}, 00:28:06.574 "ddgst": ${ddgst:-false} 00:28:06.574 }, 00:28:06.574 "method": "bdev_nvme_attach_controller" 00:28:06.574 } 00:28:06.574 EOF 00:28:06.574 )") 00:28:06.574 11:14:03 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:06.574 11:14:03 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:06.574 11:14:03 -- target/dif.sh@54 -- # local file 00:28:06.574 11:14:03 -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:06.574 11:14:03 -- target/dif.sh@56 -- # cat 00:28:06.574 11:14:03 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.574 11:14:03 -- common/autotest_common.sh@1337 -- # shift 00:28:06.574 11:14:03 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:06.574 11:14:03 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.574 11:14:03 -- nvmf/common.sh@543 -- # cat 00:28:06.574 11:14:03 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.574 11:14:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:06.574 11:14:03 -- common/autotest_common.sh@1341 -- # grep libasan 00:28:06.574 11:14:03 -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.574 11:14:03 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:06.574 11:14:03 -- nvmf/common.sh@545 -- # jq . 
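[editorial note] The jq step above closes gen_nvmf_target_json: the bdev_nvme_attach_controller fragment assembled here is printed immediately below and handed to fio over a file descriptor rather than a file on disk. A minimal sketch of that hand-off, assuming the fio spdk_bdev plugin built at the path shown in this log, with ordinary files standing in for the /dev/fd/62 and /dev/fd/61 descriptors the harness uses:
# sketch of the fio_bdev hand-off (file names here are placeholders, not from this log)
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf ./nvme_targets.json \
    ./dif_job.fio
# nvme_targets.json carries the bdev_nvme_attach_controller entries printed below;
# dif_job.fio is the job file gen_fio_conf writes for this pass.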
00:28:06.574 11:14:03 -- nvmf/common.sh@546 -- # IFS=, 00:28:06.574 11:14:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:06.574 "params": { 00:28:06.574 "name": "Nvme0", 00:28:06.574 "trtype": "tcp", 00:28:06.574 "traddr": "10.0.0.2", 00:28:06.574 "adrfam": "ipv4", 00:28:06.574 "trsvcid": "4420", 00:28:06.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.574 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:06.574 "hdgst": false, 00:28:06.574 "ddgst": false 00:28:06.574 }, 00:28:06.574 "method": "bdev_nvme_attach_controller" 00:28:06.574 }' 00:28:06.574 11:14:03 -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:06.574 11:14:03 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:06.574 11:14:03 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.574 11:14:03 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.574 11:14:03 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:06.574 11:14:03 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:06.574 11:14:03 -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:06.574 11:14:03 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:06.574 11:14:03 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:06.574 11:14:03 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.834 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:06.834 ... 00:28:06.834 fio-3.35 00:28:06.834 Starting 3 threads 00:28:06.834 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.411 00:28:13.411 filename0: (groupid=0, jobs=1): err= 0: pid=530132: Wed May 15 11:14:09 2024 00:28:13.411 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(173MiB/5007msec) 00:28:13.411 slat (nsec): min=5664, max=31539, avg=7935.34, stdev=1627.37 00:28:13.411 clat (usec): min=5326, max=89425, avg=10832.48, stdev=5930.88 00:28:13.411 lat (usec): min=5333, max=89431, avg=10840.41, stdev=5930.86 00:28:13.411 clat percentiles (usec): 00:28:13.411 | 1.00th=[ 6063], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7570], 00:28:13.411 | 30.00th=[ 8160], 40.00th=[ 9110], 50.00th=[10552], 60.00th=[11469], 00:28:13.411 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13698], 95.00th=[14615], 00:28:13.412 | 99.00th=[46924], 99.50th=[49021], 99.90th=[89654], 99.95th=[89654], 00:28:13.412 | 99.99th=[89654] 00:28:13.412 bw ( KiB/s): min=27136, max=39936, per=39.38%, avg=35379.20, stdev=3669.93, samples=10 00:28:13.412 iops : min= 212, max= 312, avg=276.40, stdev=28.67, samples=10 00:28:13.412 lat (msec) : 10=45.34%, 20=53.36%, 50=0.94%, 100=0.36% 00:28:13.412 cpu : usr=95.19%, sys=4.57%, ctx=8, majf=0, minf=124 00:28:13.412 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.412 issued rwts: total=1385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.412 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:13.412 filename0: (groupid=0, jobs=1): err= 0: pid=530133: Wed May 15 11:14:09 2024 00:28:13.412 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(126MiB/5045msec) 00:28:13.412 slat (nsec): min=5627, max=30799, avg=6526.35, stdev=1482.79 00:28:13.412 clat (usec): 
min=6529, max=93093, avg=14976.58, stdev=14101.16 00:28:13.412 lat (usec): min=6537, max=93100, avg=14983.11, stdev=14101.11 00:28:13.412 clat percentiles (usec): 00:28:13.412 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:28:13.412 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:28:13.412 | 70.00th=[10552], 80.00th=[11207], 90.00th=[50070], 95.00th=[51643], 00:28:13.412 | 99.00th=[53216], 99.50th=[53740], 99.90th=[89654], 99.95th=[92799], 00:28:13.412 | 99.99th=[92799] 00:28:13.412 bw ( KiB/s): min=16416, max=34816, per=28.64%, avg=25731.20, stdev=6009.42, samples=10 00:28:13.412 iops : min= 128, max= 272, avg=201.00, stdev=46.99, samples=10 00:28:13.412 lat (msec) : 10=51.04%, 20=36.94%, 50=2.18%, 100=9.83% 00:28:13.412 cpu : usr=96.31%, sys=2.70%, ctx=338, majf=0, minf=81 00:28:13.412 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.412 issued rwts: total=1007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.412 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:13.412 filename0: (groupid=0, jobs=1): err= 0: pid=530134: Wed May 15 11:14:09 2024 00:28:13.412 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5008msec) 00:28:13.412 slat (nsec): min=5644, max=33077, avg=6529.50, stdev=1315.78 00:28:13.412 clat (usec): min=6005, max=89445, avg=13063.07, stdev=6678.70 00:28:13.412 lat (usec): min=6011, max=89452, avg=13069.60, stdev=6678.73 00:28:13.412 clat percentiles (usec): 00:28:13.412 | 1.00th=[ 6456], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9765], 00:28:13.412 | 30.00th=[10552], 40.00th=[11338], 50.00th=[11994], 60.00th=[13173], 00:28:13.412 | 70.00th=[14091], 80.00th=[14877], 90.00th=[15664], 95.00th=[16581], 00:28:13.412 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52167], 99.95th=[89654], 00:28:13.412 | 99.99th=[89654] 00:28:13.412 bw ( KiB/s): min=20992, max=33024, per=32.65%, avg=29337.60, stdev=3522.93, samples=10 00:28:13.412 iops : min= 164, max= 258, avg=229.20, stdev=27.52, samples=10 00:28:13.412 lat (msec) : 10=22.37%, 20=75.02%, 50=1.65%, 100=0.96% 00:28:13.412 cpu : usr=95.89%, sys=3.85%, ctx=44, majf=0, minf=82 00:28:13.412 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.412 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.412 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:13.412 00:28:13.412 Run status group 0 (all jobs): 00:28:13.412 READ: bw=87.7MiB/s (92.0MB/s), 25.0MiB/s-34.6MiB/s (26.2MB/s-36.3MB/s), io=443MiB (464MB), run=5007-5045msec 00:28:13.412 11:14:09 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:13.412 11:14:09 -- target/dif.sh@43 -- # local sub 00:28:13.412 11:14:09 -- target/dif.sh@45 -- # for sub in "$@" 00:28:13.412 11:14:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:13.412 11:14:09 -- target/dif.sh@36 -- # local sub_id=0 00:28:13.412 11:14:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
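[editorial note] Teardown mirrors setup in reverse order: the subsystem is deleted before the null bdev backing it, so no namespace is left pointing at a removed bdev. The sketch below condenses the destroy_subsystem sequence traced on either side of this point (the nvmf_delete_subsystem just issued and the bdev_null_delete that follows), again written against rpc.py as an assumed stand-in for the harness's rpc_cmd:
# per-subsystem teardown, as traced for sub_id 0
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0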
00:28:13.412 11:14:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:13.412 11:14:09 -- target/dif.sh@109 -- # bs=4k 00:28:13.412 11:14:09 -- target/dif.sh@109 -- # numjobs=8 00:28:13.412 11:14:09 -- target/dif.sh@109 -- # iodepth=16 00:28:13.412 11:14:09 -- target/dif.sh@109 -- # runtime= 00:28:13.412 11:14:09 -- target/dif.sh@109 -- # files=2 00:28:13.412 11:14:09 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:13.412 11:14:09 -- target/dif.sh@28 -- # local sub 00:28:13.412 11:14:09 -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.412 11:14:09 -- target/dif.sh@31 -- # create_subsystem 0 00:28:13.412 11:14:09 -- target/dif.sh@18 -- # local sub_id=0 00:28:13.412 11:14:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 bdev_null0 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 [2024-05-15 11:14:09.262969] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.412 11:14:09 -- target/dif.sh@31 -- # create_subsystem 1 00:28:13.412 11:14:09 -- target/dif.sh@18 -- # local sub_id=1 00:28:13.412 11:14:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 bdev_null1 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:13.412 11:14:09 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.412 11:14:09 -- target/dif.sh@31 -- # create_subsystem 2 00:28:13.412 11:14:09 -- target/dif.sh@18 -- # local sub_id=2 00:28:13.412 11:14:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 bdev_null2 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:13.412 11:14:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.412 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.412 11:14:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.412 11:14:09 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:13.412 11:14:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.412 11:14:09 -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.412 11:14:09 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:13.412 11:14:09 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:13.412 11:14:09 -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:13.412 11:14:09 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.412 11:14:09 -- common/autotest_common.sh@1337 -- # shift 00:28:13.412 11:14:09 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:13.412 11:14:09 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.412 11:14:09 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:13.412 11:14:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:13.412 11:14:09 -- target/dif.sh@82 -- # gen_fio_conf 00:28:13.412 11:14:09 -- nvmf/common.sh@521 -- # config=() 00:28:13.412 11:14:09 -- target/dif.sh@54 -- # local file 00:28:13.412 11:14:09 -- nvmf/common.sh@521 -- # local subsystem 
config 00:28:13.412 11:14:09 -- target/dif.sh@56 -- # cat 00:28:13.412 11:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:13.412 11:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:13.412 { 00:28:13.412 "params": { 00:28:13.412 "name": "Nvme$subsystem", 00:28:13.412 "trtype": "$TEST_TRANSPORT", 00:28:13.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.413 "adrfam": "ipv4", 00:28:13.413 "trsvcid": "$NVMF_PORT", 00:28:13.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.413 "hdgst": ${hdgst:-false}, 00:28:13.413 "ddgst": ${ddgst:-false} 00:28:13.413 }, 00:28:13.413 "method": "bdev_nvme_attach_controller" 00:28:13.413 } 00:28:13.413 EOF 00:28:13.413 )") 00:28:13.413 11:14:09 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.413 11:14:09 -- common/autotest_common.sh@1341 -- # grep libasan 00:28:13.413 11:14:09 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:13.413 11:14:09 -- nvmf/common.sh@543 -- # cat 00:28:13.413 11:14:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:13.413 11:14:09 -- target/dif.sh@72 -- # (( file <= files )) 00:28:13.413 11:14:09 -- target/dif.sh@73 -- # cat 00:28:13.413 11:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:13.413 11:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:13.413 { 00:28:13.413 "params": { 00:28:13.413 "name": "Nvme$subsystem", 00:28:13.413 "trtype": "$TEST_TRANSPORT", 00:28:13.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.413 "adrfam": "ipv4", 00:28:13.413 "trsvcid": "$NVMF_PORT", 00:28:13.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.413 "hdgst": ${hdgst:-false}, 00:28:13.413 "ddgst": ${ddgst:-false} 00:28:13.413 }, 00:28:13.413 "method": "bdev_nvme_attach_controller" 00:28:13.413 } 00:28:13.413 EOF 00:28:13.413 )") 00:28:13.413 11:14:09 -- target/dif.sh@72 -- # (( file++ )) 00:28:13.413 11:14:09 -- target/dif.sh@72 -- # (( file <= files )) 00:28:13.413 11:14:09 -- target/dif.sh@73 -- # cat 00:28:13.413 11:14:09 -- nvmf/common.sh@543 -- # cat 00:28:13.413 11:14:09 -- target/dif.sh@72 -- # (( file++ )) 00:28:13.413 11:14:09 -- target/dif.sh@72 -- # (( file <= files )) 00:28:13.413 11:14:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:13.413 11:14:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:13.413 { 00:28:13.413 "params": { 00:28:13.413 "name": "Nvme$subsystem", 00:28:13.413 "trtype": "$TEST_TRANSPORT", 00:28:13.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.413 "adrfam": "ipv4", 00:28:13.413 "trsvcid": "$NVMF_PORT", 00:28:13.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.413 "hdgst": ${hdgst:-false}, 00:28:13.413 "ddgst": ${ddgst:-false} 00:28:13.413 }, 00:28:13.413 "method": "bdev_nvme_attach_controller" 00:28:13.413 } 00:28:13.413 EOF 00:28:13.413 )") 00:28:13.413 11:14:09 -- nvmf/common.sh@543 -- # cat 00:28:13.413 11:14:09 -- nvmf/common.sh@545 -- # jq . 
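Note on the trace above: gen_nvmf_target_json assembles the fio --spdk_json_conf payload by expanding one template per subsystem into a bdev_nvme_attach_controller fragment, collecting the fragments in the config array, then comma-joining them with IFS=, and running the result through jq (the joined fragments are printed just below). A rough standalone sketch of the same assembly, building the fragments with jq instead of heredocs; the 10.0.0.2:4420 listener values come from this run, everything else is illustrative, and the final splice into the full JSON document is not visible in this xtrace:

config=()
for i in 0 1 2; do
    config+=("$(jq -nc --arg i "$i" '{
        params: {
            name: ("Nvme" + $i),
            trtype: "tcp", traddr: "10.0.0.2", adrfam: "ipv4", trsvcid: "4420",
            subnqn: ("nqn.2016-06.io.spdk:cnode" + $i),
            hostnqn: ("nqn.2016-06.io.spdk:host" + $i),
            hdgst: false, ddgst: false
        },
        method: "bdev_nvme_attach_controller"
    }')")
done
# Comma-join the fragments, as the IFS=, / printf pair in the trace does; the
# harness then splices this list into the larger config it hands to jq/fio.
IFS=,
joined="${config[*]}"
printf '%s\n' "$joined"
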
00:28:13.413 11:14:09 -- nvmf/common.sh@546 -- # IFS=, 00:28:13.413 11:14:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:13.413 "params": { 00:28:13.413 "name": "Nvme0", 00:28:13.413 "trtype": "tcp", 00:28:13.413 "traddr": "10.0.0.2", 00:28:13.413 "adrfam": "ipv4", 00:28:13.413 "trsvcid": "4420", 00:28:13.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:13.413 "hdgst": false, 00:28:13.413 "ddgst": false 00:28:13.413 }, 00:28:13.413 "method": "bdev_nvme_attach_controller" 00:28:13.413 },{ 00:28:13.413 "params": { 00:28:13.413 "name": "Nvme1", 00:28:13.413 "trtype": "tcp", 00:28:13.413 "traddr": "10.0.0.2", 00:28:13.413 "adrfam": "ipv4", 00:28:13.413 "trsvcid": "4420", 00:28:13.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:13.413 "hdgst": false, 00:28:13.413 "ddgst": false 00:28:13.413 }, 00:28:13.413 "method": "bdev_nvme_attach_controller" 00:28:13.413 },{ 00:28:13.413 "params": { 00:28:13.413 "name": "Nvme2", 00:28:13.413 "trtype": "tcp", 00:28:13.413 "traddr": "10.0.0.2", 00:28:13.413 "adrfam": "ipv4", 00:28:13.413 "trsvcid": "4420", 00:28:13.413 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:13.413 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:13.413 "hdgst": false, 00:28:13.413 "ddgst": false 00:28:13.413 }, 00:28:13.413 "method": "bdev_nvme_attach_controller" 00:28:13.413 }' 00:28:13.413 11:14:09 -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:13.413 11:14:09 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:13.413 11:14:09 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.413 11:14:09 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.413 11:14:09 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:13.413 11:14:09 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:13.413 11:14:09 -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:13.413 11:14:09 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:13.413 11:14:09 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:13.413 11:14:09 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.413 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:13.413 ... 00:28:13.413 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:13.413 ... 00:28:13.413 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:13.413 ... 
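The LD_PRELOAD / fio command at the end of the block above is the whole client side of this test: fio loads SPDK's external bdev ioengine, attaches to the three NVMe-oF subsystems through the JSON config shown, and reads the job description from /dev/fd/61. A minimal standalone sketch of the same invocation, with regular files standing in for the two pipe file descriptors (paths are the ones used by this CI job):

# conf.json: the bdev_nvme_attach_controller config printed above
# job.fio:   the job description the harness normally pipes in on /dev/fd/61
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf conf.json job.fio
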
00:28:13.413 fio-3.35 00:28:13.413 Starting 24 threads 00:28:13.413 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.645 00:28:25.645 filename0: (groupid=0, jobs=1): err= 0: pid=531584: Wed May 15 11:14:20 2024 00:28:25.645 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10004msec) 00:28:25.645 slat (usec): min=5, max=102, avg=28.27, stdev=15.94 00:28:25.645 clat (usec): min=15994, max=40013, avg=31606.29, stdev=944.46 00:28:25.645 lat (usec): min=16020, max=40048, avg=31634.56, stdev=944.01 00:28:25.645 clat percentiles (usec): 00:28:25.645 | 1.00th=[30802], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:28:25.645 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.645 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.645 | 99.00th=[32900], 99.50th=[33162], 99.90th=[39584], 99.95th=[39584], 00:28:25.645 | 99.99th=[40109] 00:28:25.645 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=2006.89, stdev=61.60, samples=19 00:28:25.645 iops : min= 479, max= 512, avg=501.68, stdev=15.38, samples=19 00:28:25.645 lat (msec) : 20=0.28%, 50=99.72% 00:28:25.645 cpu : usr=99.24%, sys=0.46%, ctx=11, majf=0, minf=26 00:28:25.645 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.645 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.645 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.645 filename0: (groupid=0, jobs=1): err= 0: pid=531585: Wed May 15 11:14:20 2024 00:28:25.645 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10002msec) 00:28:25.645 slat (nsec): min=5783, max=80943, avg=16590.62, stdev=9971.84 00:28:25.645 clat (usec): min=30073, max=54163, avg=31821.64, stdev=1326.80 00:28:25.645 lat (usec): min=30082, max=54185, avg=31838.23, stdev=1326.12 00:28:25.645 clat percentiles (usec): 00:28:25.645 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:28:25.645 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:28:25.645 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[32637], 00:28:25.645 | 99.00th=[33424], 99.50th=[33817], 99.90th=[54264], 99.95th=[54264], 00:28:25.645 | 99.99th=[54264] 00:28:25.645 bw ( KiB/s): min= 1788, max= 2048, per=4.14%, avg=2000.11, stdev=76.73, samples=19 00:28:25.645 iops : min= 447, max= 512, avg=499.95, stdev=19.14, samples=19 00:28:25.645 lat (msec) : 50=99.68%, 100=0.32% 00:28:25.645 cpu : usr=99.00%, sys=0.69%, ctx=48, majf=0, minf=27 00:28:25.645 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:25.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.645 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.645 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.645 filename0: (groupid=0, jobs=1): err= 0: pid=531587: Wed May 15 11:14:20 2024 00:28:25.645 read: IOPS=501, BW=2006KiB/s (2054kB/s)(19.6MiB/10018msec) 00:28:25.645 slat (nsec): min=5769, max=82633, avg=21396.40, stdev=13247.75 00:28:25.645 clat (usec): min=19407, max=51025, avg=31694.31, stdev=1355.44 00:28:25.645 lat (usec): min=19414, max=51056, avg=31715.70, stdev=1355.71 00:28:25.645 clat percentiles (usec): 00:28:25.645 | 1.00th=[30278], 5.00th=[31065], 
10.00th=[31327], 20.00th=[31327], 00:28:25.645 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.645 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.645 | 99.00th=[33424], 99.50th=[33817], 99.90th=[51119], 99.95th=[51119], 00:28:25.645 | 99.99th=[51119] 00:28:25.645 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=2000.16, stdev=63.82, samples=19 00:28:25.645 iops : min= 479, max= 512, avg=500.00, stdev=15.93, samples=19 00:28:25.645 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:28:25.645 cpu : usr=98.85%, sys=0.69%, ctx=49, majf=0, minf=29 00:28:25.645 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:25.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.645 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.645 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.645 filename0: (groupid=0, jobs=1): err= 0: pid=531588: Wed May 15 11:14:20 2024 00:28:25.645 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10004msec) 00:28:25.645 slat (nsec): min=5799, max=76410, avg=25144.80, stdev=13062.03 00:28:25.645 clat (usec): min=12291, max=52676, avg=31624.38, stdev=1744.39 00:28:25.645 lat (usec): min=12298, max=52692, avg=31649.53, stdev=1744.31 00:28:25.645 clat percentiles (usec): 00:28:25.645 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31327], 20.00th=[31327], 00:28:25.645 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.645 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.645 | 99.00th=[33162], 99.50th=[33162], 99.90th=[52691], 99.95th=[52691], 00:28:25.645 | 99.99th=[52691] 00:28:25.645 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1999.47, stdev=75.41, samples=19 00:28:25.645 iops : min= 448, max= 512, avg=499.63, stdev=18.85, samples=19 00:28:25.645 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:28:25.645 cpu : usr=99.15%, sys=0.56%, ctx=14, majf=0, minf=32 00:28:25.645 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.645 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.645 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.645 filename0: (groupid=0, jobs=1): err= 0: pid=531589: Wed May 15 11:14:20 2024 00:28:25.645 read: IOPS=514, BW=2058KiB/s (2107kB/s)(20.1MiB/10016msec) 00:28:25.645 slat (nsec): min=5774, max=61515, avg=9884.82, stdev=5331.30 00:28:25.645 clat (usec): min=2354, max=33984, avg=31018.98, stdev=4055.70 00:28:25.645 lat (usec): min=2379, max=33992, avg=31028.86, stdev=4054.65 00:28:25.645 clat percentiles (usec): 00:28:25.645 | 1.00th=[ 5342], 5.00th=[30802], 10.00th=[31589], 20.00th=[31589], 00:28:25.645 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:28:25.645 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.645 | 99.00th=[32900], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:28:25.645 | 99.99th=[33817] 00:28:25.645 bw ( KiB/s): min= 1916, max= 2688, per=4.25%, avg=2053.95, stdev=158.37, samples=20 00:28:25.646 iops : min= 479, max= 672, avg=513.45, stdev=39.60, samples=20 00:28:25.646 lat (msec) : 4=0.49%, 10=1.38%, 20=1.24%, 50=96.89% 00:28:25.646 cpu : usr=99.06%, 
sys=0.65%, ctx=14, majf=0, minf=63 00:28:25.646 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.646 filename0: (groupid=0, jobs=1): err= 0: pid=531590: Wed May 15 11:14:20 2024 00:28:25.646 read: IOPS=509, BW=2039KiB/s (2088kB/s)(20.0MiB/10024msec) 00:28:25.646 slat (nsec): min=5758, max=86007, avg=19305.90, stdev=14053.77 00:28:25.646 clat (usec): min=13262, max=59725, avg=31229.05, stdev=4372.33 00:28:25.646 lat (usec): min=13269, max=59760, avg=31248.35, stdev=4374.11 00:28:25.646 clat percentiles (usec): 00:28:25.646 | 1.00th=[19530], 5.00th=[22152], 10.00th=[26346], 20.00th=[31327], 00:28:25.646 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.646 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[35914], 00:28:25.646 | 99.00th=[49546], 99.50th=[53216], 99.90th=[59507], 99.95th=[59507], 00:28:25.646 | 99.99th=[59507] 00:28:25.646 bw ( KiB/s): min= 1840, max= 2240, per=4.21%, avg=2036.90, stdev=97.63, samples=20 00:28:25.646 iops : min= 460, max= 560, avg=509.15, stdev=24.36, samples=20 00:28:25.646 lat (msec) : 20=1.49%, 50=97.53%, 100=0.98% 00:28:25.646 cpu : usr=98.33%, sys=0.94%, ctx=485, majf=0, minf=30 00:28:25.646 IO depths : 1=4.7%, 2=9.4%, 4=20.4%, 8=57.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:28:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 issued rwts: total=5110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.646 filename0: (groupid=0, jobs=1): err= 0: pid=531591: Wed May 15 11:14:20 2024 00:28:25.646 read: IOPS=501, BW=2006KiB/s (2054kB/s)(19.6MiB/10020msec) 00:28:25.646 slat (nsec): min=5663, max=96863, avg=15394.24, stdev=13773.85 00:28:25.646 clat (usec): min=19794, max=52782, avg=31789.43, stdev=1425.02 00:28:25.646 lat (usec): min=19800, max=52798, avg=31804.83, stdev=1424.31 00:28:25.646 clat percentiles (usec): 00:28:25.646 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:28:25.646 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:28:25.646 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[32637], 00:28:25.646 | 99.00th=[33424], 99.50th=[33817], 99.90th=[52691], 99.95th=[52691], 00:28:25.646 | 99.99th=[52691] 00:28:25.646 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=2000.47, stdev=75.67, samples=19 00:28:25.646 iops : min= 448, max= 512, avg=500.00, stdev=18.99, samples=19 00:28:25.646 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:28:25.646 cpu : usr=98.77%, sys=0.69%, ctx=229, majf=0, minf=23 00:28:25.646 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.646 filename0: (groupid=0, jobs=1): err= 0: pid=531592: Wed May 15 11:14:20 2024 00:28:25.646 read: IOPS=501, BW=2006KiB/s 
(2054kB/s)(19.6MiB/10008msec) 00:28:25.646 slat (nsec): min=5679, max=87675, avg=16320.68, stdev=13231.06 00:28:25.646 clat (usec): min=12567, max=55980, avg=31774.93, stdev=2346.23 00:28:25.646 lat (usec): min=12575, max=55996, avg=31791.25, stdev=2345.88 00:28:25.646 clat percentiles (usec): 00:28:25.646 | 1.00th=[23462], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:28:25.646 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:28:25.646 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:28:25.646 | 99.00th=[40109], 99.50th=[41681], 99.90th=[55837], 99.95th=[55837], 00:28:25.646 | 99.99th=[55837] 00:28:25.646 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=2000.11, stdev=72.00, samples=19 00:28:25.646 iops : min= 448, max= 512, avg=499.95, stdev=17.95, samples=19 00:28:25.646 lat (msec) : 20=0.20%, 50=99.48%, 100=0.32% 00:28:25.646 cpu : usr=99.13%, sys=0.55%, ctx=34, majf=0, minf=25 00:28:25.646 IO depths : 1=4.8%, 2=10.9%, 4=24.4%, 8=52.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 issued rwts: total=5018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.646 filename1: (groupid=0, jobs=1): err= 0: pid=531594: Wed May 15 11:14:20 2024 00:28:25.646 read: IOPS=502, BW=2012KiB/s (2060kB/s)(19.7MiB/10020msec) 00:28:25.646 slat (nsec): min=5756, max=41849, avg=9839.51, stdev=5338.48 00:28:25.646 clat (usec): min=7729, max=36780, avg=31720.10, stdev=1365.11 00:28:25.646 lat (usec): min=7738, max=36805, avg=31729.94, stdev=1365.22 00:28:25.646 clat percentiles (usec): 00:28:25.646 | 1.00th=[30278], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:28:25.646 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:28:25.646 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:28:25.646 | 99.00th=[32900], 99.50th=[33817], 99.90th=[34866], 99.95th=[36439], 00:28:25.646 | 99.99th=[36963] 00:28:25.646 bw ( KiB/s): min= 1916, max= 2048, per=4.16%, avg=2008.25, stdev=59.81, samples=20 00:28:25.646 iops : min= 479, max= 512, avg=501.95, stdev=14.89, samples=20 00:28:25.646 lat (msec) : 10=0.32%, 50=99.68% 00:28:25.646 cpu : usr=99.26%, sys=0.47%, ctx=8, majf=0, minf=38 00:28:25.646 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.646 filename1: (groupid=0, jobs=1): err= 0: pid=531595: Wed May 15 11:14:20 2024 00:28:25.646 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10001msec) 00:28:25.646 slat (nsec): min=5854, max=85237, avg=23301.13, stdev=12996.46 00:28:25.646 clat (usec): min=30111, max=53121, avg=31735.66, stdev=1280.17 00:28:25.646 lat (usec): min=30135, max=53140, avg=31758.96, stdev=1279.41 00:28:25.646 clat percentiles (usec): 00:28:25.646 | 1.00th=[30802], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:28:25.646 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.646 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.646 | 99.00th=[33424], 99.50th=[33817], 
99.90th=[53216], 99.95th=[53216], 00:28:25.646 | 99.99th=[53216] 00:28:25.646 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=2000.47, stdev=75.67, samples=19 00:28:25.646 iops : min= 448, max= 512, avg=500.00, stdev=18.99, samples=19 00:28:25.646 lat (msec) : 50=99.68%, 100=0.32% 00:28:25.646 cpu : usr=98.84%, sys=0.67%, ctx=116, majf=0, minf=30 00:28:25.646 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.646 filename1: (groupid=0, jobs=1): err= 0: pid=531596: Wed May 15 11:14:20 2024 00:28:25.646 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10004msec) 00:28:25.646 slat (nsec): min=5809, max=84701, avg=26232.21, stdev=14663.40 00:28:25.646 clat (usec): min=10790, max=53026, avg=31605.66, stdev=1746.69 00:28:25.646 lat (usec): min=10796, max=53044, avg=31631.89, stdev=1746.92 00:28:25.646 clat percentiles (usec): 00:28:25.646 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31327], 20.00th=[31327], 00:28:25.646 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.646 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.646 | 99.00th=[32900], 99.50th=[33162], 99.90th=[53216], 99.95th=[53216], 00:28:25.646 | 99.99th=[53216] 00:28:25.646 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1999.47, stdev=75.41, samples=19 00:28:25.646 iops : min= 448, max= 512, avg=499.63, stdev=18.85, samples=19 00:28:25.646 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:28:25.646 cpu : usr=99.07%, sys=0.49%, ctx=44, majf=0, minf=27 00:28:25.646 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.646 filename1: (groupid=0, jobs=1): err= 0: pid=531597: Wed May 15 11:14:20 2024 00:28:25.646 read: IOPS=502, BW=2008KiB/s (2056kB/s)(19.6MiB/10007msec) 00:28:25.646 slat (nsec): min=5675, max=82430, avg=20821.71, stdev=12933.83 00:28:25.646 clat (usec): min=12358, max=55203, avg=31696.01, stdev=1959.14 00:28:25.646 lat (usec): min=12364, max=55220, avg=31716.83, stdev=1958.94 00:28:25.646 clat percentiles (usec): 00:28:25.646 | 1.00th=[30278], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:28:25.646 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.646 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.646 | 99.00th=[33162], 99.50th=[40109], 99.90th=[55313], 99.95th=[55313], 00:28:25.646 | 99.99th=[55313] 00:28:25.646 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=2000.26, stdev=74.59, samples=19 00:28:25.646 iops : min= 448, max= 512, avg=499.95, stdev=18.72, samples=19 00:28:25.646 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:28:25.646 cpu : usr=99.10%, sys=0.58%, ctx=51, majf=0, minf=33 00:28:25.646 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.646 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.646 filename1: (groupid=0, jobs=1): err= 0: pid=531598: Wed May 15 11:14:20 2024 00:28:25.646 read: IOPS=520, BW=2083KiB/s (2133kB/s)(20.4MiB/10018msec) 00:28:25.646 slat (nsec): min=5693, max=42956, avg=8211.83, stdev=3105.77 00:28:25.646 clat (usec): min=2509, max=33404, avg=30655.97, stdev=4557.84 00:28:25.646 lat (usec): min=2535, max=33413, avg=30664.18, stdev=4556.44 00:28:25.646 clat percentiles (usec): 00:28:25.646 | 1.00th=[ 3884], 5.00th=[20317], 10.00th=[31327], 20.00th=[31589], 00:28:25.646 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:28:25.646 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.647 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33424], 99.95th=[33424], 00:28:25.647 | 99.99th=[33424] 00:28:25.647 bw ( KiB/s): min= 1916, max= 2688, per=4.30%, avg=2078.90, stdev=171.10, samples=20 00:28:25.647 iops : min= 479, max= 672, avg=519.65, stdev=42.76, samples=20 00:28:25.647 lat (msec) : 4=1.17%, 10=0.98%, 20=2.15%, 50=95.71% 00:28:25.647 cpu : usr=99.04%, sys=0.67%, ctx=14, majf=0, minf=45 00:28:25.647 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.647 filename1: (groupid=0, jobs=1): err= 0: pid=531599: Wed May 15 11:14:20 2024 00:28:25.647 read: IOPS=503, BW=2014KiB/s (2062kB/s)(19.7MiB/10004msec) 00:28:25.647 slat (nsec): min=5942, max=87150, avg=24971.05, stdev=14067.34 00:28:25.647 clat (usec): min=12388, max=77308, avg=31552.27, stdev=2469.15 00:28:25.647 lat (usec): min=12395, max=77324, avg=31577.24, stdev=2469.68 00:28:25.647 clat percentiles (usec): 00:28:25.647 | 1.00th=[22152], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:28:25.647 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.647 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.647 | 99.00th=[33424], 99.50th=[41681], 99.90th=[56886], 99.95th=[56886], 00:28:25.647 | 99.99th=[77071] 00:28:25.647 bw ( KiB/s): min= 1888, max= 2048, per=4.15%, avg=2004.42, stdev=63.95, samples=19 00:28:25.647 iops : min= 472, max= 512, avg=500.95, stdev=15.89, samples=19 00:28:25.647 lat (msec) : 20=0.64%, 50=99.05%, 100=0.32% 00:28:25.647 cpu : usr=98.69%, sys=0.79%, ctx=117, majf=0, minf=22 00:28:25.647 IO depths : 1=5.6%, 2=11.3%, 4=23.1%, 8=52.8%, 16=7.3%, 32=0.0%, >=64=0.0% 00:28:25.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 issued rwts: total=5036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.647 filename1: (groupid=0, jobs=1): err= 0: pid=531600: Wed May 15 11:14:20 2024 00:28:25.647 read: IOPS=503, BW=2012KiB/s (2060kB/s)(19.7MiB/10011msec) 00:28:25.647 slat (nsec): min=5629, max=71596, avg=13788.46, stdev=9466.86 00:28:25.647 clat (usec): min=10385, max=62307, avg=31692.41, stdev=3104.18 00:28:25.647 lat (usec): min=10392, max=62328, avg=31706.20, stdev=3104.67 
00:28:25.647 clat percentiles (usec): 00:28:25.647 | 1.00th=[19006], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:28:25.647 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:28:25.647 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:28:25.647 | 99.00th=[43779], 99.50th=[45351], 99.90th=[50070], 99.95th=[50070], 00:28:25.647 | 99.99th=[62129] 00:28:25.647 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=2002.58, stdev=60.51, samples=19 00:28:25.647 iops : min= 479, max= 512, avg=500.53, stdev=15.11, samples=19 00:28:25.647 lat (msec) : 20=2.26%, 50=97.54%, 100=0.20% 00:28:25.647 cpu : usr=99.23%, sys=0.46%, ctx=62, majf=0, minf=37 00:28:25.647 IO depths : 1=5.1%, 2=11.0%, 4=23.7%, 8=52.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:28:25.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 issued rwts: total=5036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.647 filename1: (groupid=0, jobs=1): err= 0: pid=531601: Wed May 15 11:14:20 2024 00:28:25.647 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10001msec) 00:28:25.647 slat (nsec): min=5218, max=83141, avg=22190.08, stdev=13681.18 00:28:25.647 clat (usec): min=19249, max=65621, avg=31744.95, stdev=1425.24 00:28:25.647 lat (usec): min=19258, max=65641, avg=31767.14, stdev=1424.53 00:28:25.647 clat percentiles (usec): 00:28:25.647 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:28:25.647 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.647 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[32637], 00:28:25.647 | 99.00th=[33424], 99.50th=[33817], 99.90th=[53740], 99.95th=[53740], 00:28:25.647 | 99.99th=[65799] 00:28:25.647 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=2000.32, stdev=76.12, samples=19 00:28:25.647 iops : min= 448, max= 512, avg=500.00, stdev=18.99, samples=19 00:28:25.647 lat (msec) : 20=0.04%, 50=99.64%, 100=0.32% 00:28:25.647 cpu : usr=98.78%, sys=0.76%, ctx=101, majf=0, minf=25 00:28:25.647 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.647 filename2: (groupid=0, jobs=1): err= 0: pid=531603: Wed May 15 11:14:20 2024 00:28:25.647 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10015msec) 00:28:25.647 slat (nsec): min=5767, max=41832, avg=8689.46, stdev=4062.45 00:28:25.647 clat (usec): min=18461, max=51773, avg=31819.06, stdev=2017.35 00:28:25.647 lat (usec): min=18468, max=51792, avg=31827.75, stdev=2017.32 00:28:25.647 clat percentiles (usec): 00:28:25.647 | 1.00th=[21627], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:28:25.647 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:28:25.647 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:28:25.647 | 99.00th=[42730], 99.50th=[44303], 99.90th=[51643], 99.95th=[51643], 00:28:25.647 | 99.99th=[51643] 00:28:25.647 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=2002.70, stdev=60.75, samples=20 00:28:25.647 iops : min= 480, max= 512, avg=500.60, stdev=15.14, samples=20 
00:28:25.647 lat (msec) : 20=0.88%, 50=98.81%, 100=0.32% 00:28:25.647 cpu : usr=99.39%, sys=0.31%, ctx=40, majf=0, minf=37 00:28:25.647 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:25.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.647 filename2: (groupid=0, jobs=1): err= 0: pid=531604: Wed May 15 11:14:20 2024 00:28:25.647 read: IOPS=510, BW=2043KiB/s (2092kB/s)(20.0MiB/10005msec) 00:28:25.647 slat (nsec): min=5657, max=82396, avg=11716.25, stdev=8740.53 00:28:25.647 clat (usec): min=10860, max=63958, avg=31272.18, stdev=3656.21 00:28:25.647 lat (usec): min=10866, max=63975, avg=31283.90, stdev=3656.35 00:28:25.647 clat percentiles (usec): 00:28:25.647 | 1.00th=[18744], 5.00th=[24249], 10.00th=[27132], 20.00th=[31589], 00:28:25.647 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:28:25.647 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:28:25.647 | 99.00th=[44827], 99.50th=[46924], 99.90th=[53740], 99.95th=[53740], 00:28:25.647 | 99.99th=[63701] 00:28:25.647 bw ( KiB/s): min= 1795, max= 2240, per=4.22%, avg=2037.32, stdev=81.58, samples=19 00:28:25.647 iops : min= 448, max= 560, avg=509.05, stdev=20.49, samples=19 00:28:25.647 lat (msec) : 20=1.86%, 50=97.83%, 100=0.31% 00:28:25.647 cpu : usr=98.58%, sys=0.91%, ctx=199, majf=0, minf=42 00:28:25.647 IO depths : 1=0.1%, 2=0.9%, 4=4.1%, 8=78.0%, 16=17.0%, 32=0.0%, >=64=0.0% 00:28:25.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 complete : 0=0.0%, 4=89.9%, 8=8.7%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 issued rwts: total=5110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.647 filename2: (groupid=0, jobs=1): err= 0: pid=531605: Wed May 15 11:14:20 2024 00:28:25.647 read: IOPS=505, BW=2022KiB/s (2070kB/s)(19.8MiB/10023msec) 00:28:25.647 slat (nsec): min=5765, max=72473, avg=16019.00, stdev=9897.05 00:28:25.647 clat (usec): min=19540, max=48289, avg=31513.15, stdev=2232.84 00:28:25.647 lat (usec): min=19560, max=48299, avg=31529.17, stdev=2233.43 00:28:25.647 clat percentiles (usec): 00:28:25.647 | 1.00th=[22152], 5.00th=[28967], 10.00th=[31327], 20.00th=[31589], 00:28:25.647 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.647 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[32637], 00:28:25.647 | 99.00th=[38536], 99.50th=[44303], 99.90th=[47973], 99.95th=[48497], 00:28:25.647 | 99.99th=[48497] 00:28:25.647 bw ( KiB/s): min= 1916, max= 2208, per=4.18%, avg=2019.55, stdev=79.57, samples=20 00:28:25.647 iops : min= 479, max= 552, avg=504.85, stdev=19.88, samples=20 00:28:25.647 lat (msec) : 20=0.18%, 50=99.82% 00:28:25.647 cpu : usr=98.91%, sys=0.64%, ctx=34, majf=0, minf=20 00:28:25.647 IO depths : 1=5.4%, 2=11.1%, 4=23.3%, 8=52.9%, 16=7.3%, 32=0.0%, >=64=0.0% 00:28:25.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.647 issued rwts: total=5066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.647 filename2: (groupid=0, jobs=1): err= 0: pid=531606: 
Wed May 15 11:14:20 2024 00:28:25.647 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10004msec) 00:28:25.647 slat (nsec): min=5831, max=80859, avg=22788.29, stdev=10532.84 00:28:25.647 clat (usec): min=12302, max=52396, avg=31654.72, stdev=1727.71 00:28:25.648 lat (usec): min=12313, max=52413, avg=31677.51, stdev=1727.61 00:28:25.648 clat percentiles (usec): 00:28:25.648 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31327], 20.00th=[31327], 00:28:25.648 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.648 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.648 | 99.00th=[32900], 99.50th=[33424], 99.90th=[52167], 99.95th=[52167], 00:28:25.648 | 99.99th=[52167] 00:28:25.648 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1999.42, stdev=63.70, samples=19 00:28:25.648 iops : min= 479, max= 512, avg=499.74, stdev=15.84, samples=19 00:28:25.648 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:28:25.648 cpu : usr=99.26%, sys=0.45%, ctx=10, majf=0, minf=34 00:28:25.648 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:25.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.648 filename2: (groupid=0, jobs=1): err= 0: pid=531607: Wed May 15 11:14:20 2024 00:28:25.648 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10005msec) 00:28:25.648 slat (nsec): min=5421, max=84136, avg=21133.03, stdev=13785.70 00:28:25.648 clat (usec): min=12274, max=53914, avg=31680.13, stdev=1814.49 00:28:25.648 lat (usec): min=12295, max=53929, avg=31701.26, stdev=1814.22 00:28:25.648 clat percentiles (usec): 00:28:25.648 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31327], 20.00th=[31327], 00:28:25.648 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.648 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.648 | 99.00th=[33162], 99.50th=[33424], 99.90th=[53740], 99.95th=[53740], 00:28:25.648 | 99.99th=[53740] 00:28:25.648 bw ( KiB/s): min= 1792, max= 2064, per=4.14%, avg=1999.32, stdev=76.05, samples=19 00:28:25.648 iops : min= 448, max= 516, avg=499.63, stdev=18.90, samples=19 00:28:25.648 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:28:25.648 cpu : usr=99.25%, sys=0.47%, ctx=15, majf=0, minf=49 00:28:25.648 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:25.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.648 filename2: (groupid=0, jobs=1): err= 0: pid=531608: Wed May 15 11:14:20 2024 00:28:25.648 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10001msec) 00:28:25.648 slat (nsec): min=5775, max=81325, avg=19114.60, stdev=11696.20 00:28:25.648 clat (usec): min=19210, max=64980, avg=31786.20, stdev=1382.28 00:28:25.648 lat (usec): min=19219, max=64995, avg=31805.32, stdev=1381.61 00:28:25.648 clat percentiles (usec): 00:28:25.648 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:28:25.648 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.648 | 70.00th=[31851], 80.00th=[31851], 
90.00th=[32375], 95.00th=[32637], 00:28:25.648 | 99.00th=[33424], 99.50th=[33817], 99.90th=[52691], 99.95th=[52691], 00:28:25.648 | 99.99th=[64750] 00:28:25.648 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=2000.47, stdev=75.67, samples=19 00:28:25.648 iops : min= 448, max= 512, avg=500.00, stdev=18.99, samples=19 00:28:25.648 lat (msec) : 20=0.04%, 50=99.64%, 100=0.32% 00:28:25.648 cpu : usr=99.16%, sys=0.50%, ctx=42, majf=0, minf=24 00:28:25.648 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.648 filename2: (groupid=0, jobs=1): err= 0: pid=531609: Wed May 15 11:14:20 2024 00:28:25.648 read: IOPS=505, BW=2023KiB/s (2072kB/s)(19.8MiB/10027msec) 00:28:25.648 slat (nsec): min=5983, max=92206, avg=21960.72, stdev=12549.11 00:28:25.648 clat (usec): min=3538, max=34011, avg=31431.96, stdev=2428.91 00:28:25.648 lat (usec): min=3555, max=34019, avg=31453.92, stdev=2429.34 00:28:25.648 clat percentiles (usec): 00:28:25.648 | 1.00th=[20841], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:28:25.648 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.648 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:28:25.648 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:28:25.648 | 99.99th=[33817] 00:28:25.648 bw ( KiB/s): min= 1916, max= 2304, per=4.18%, avg=2021.70, stdev=89.18, samples=20 00:28:25.648 iops : min= 479, max= 576, avg=505.35, stdev=22.28, samples=20 00:28:25.648 lat (msec) : 4=0.32%, 10=0.32%, 20=0.32%, 50=99.05% 00:28:25.648 cpu : usr=99.18%, sys=0.53%, ctx=11, majf=0, minf=28 00:28:25.648 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:25.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.648 filename2: (groupid=0, jobs=1): err= 0: pid=531610: Wed May 15 11:14:20 2024 00:28:25.648 read: IOPS=502, BW=2012KiB/s (2060kB/s)(19.6MiB/10002msec) 00:28:25.648 slat (nsec): min=5770, max=87980, avg=25133.92, stdev=15040.82 00:28:25.648 clat (usec): min=16150, max=57213, avg=31601.63, stdev=2799.00 00:28:25.648 lat (usec): min=16159, max=57234, avg=31626.76, stdev=2799.33 00:28:25.648 clat percentiles (usec): 00:28:25.648 | 1.00th=[19530], 5.00th=[26870], 10.00th=[31065], 20.00th=[31327], 00:28:25.648 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:28:25.648 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[33424], 00:28:25.648 | 99.00th=[43254], 99.50th=[46924], 99.90th=[53216], 99.95th=[53216], 00:28:25.648 | 99.99th=[57410] 00:28:25.648 bw ( KiB/s): min= 1920, max= 2112, per=4.16%, avg=2009.05, stdev=62.59, samples=19 00:28:25.648 iops : min= 480, max= 528, avg=502.11, stdev=15.57, samples=19 00:28:25.648 lat (msec) : 20=1.03%, 50=98.69%, 100=0.28% 00:28:25.648 cpu : usr=99.37%, sys=0.33%, ctx=18, majf=0, minf=23 00:28:25.648 IO depths : 1=4.7%, 2=9.9%, 4=21.3%, 8=55.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:28:25.648 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 complete : 0=0.0%, 4=93.2%, 8=1.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.648 issued rwts: total=5030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:25.648 00:28:25.648 Run status group 0 (all jobs): 00:28:25.648 READ: bw=47.2MiB/s (49.5MB/s), 2003KiB/s-2083KiB/s (2051kB/s-2133kB/s), io=473MiB (496MB), run=10001-10027msec 00:28:25.648 11:14:21 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:25.648 11:14:21 -- target/dif.sh@43 -- # local sub 00:28:25.648 11:14:21 -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.648 11:14:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:25.648 11:14:21 -- target/dif.sh@36 -- # local sub_id=0 00:28:25.648 11:14:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.648 11:14:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:25.648 11:14:21 -- target/dif.sh@36 -- # local sub_id=1 00:28:25.648 11:14:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.648 11:14:21 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:25.648 11:14:21 -- target/dif.sh@36 -- # local sub_id=2 00:28:25.648 11:14:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:25.648 11:14:21 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:25.648 11:14:21 -- target/dif.sh@115 -- # numjobs=2 00:28:25.648 11:14:21 -- target/dif.sh@115 -- # iodepth=8 00:28:25.648 11:14:21 -- target/dif.sh@115 -- # runtime=5 00:28:25.648 11:14:21 -- target/dif.sh@115 -- # files=1 00:28:25.648 11:14:21 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:25.648 11:14:21 -- target/dif.sh@28 -- # local sub 00:28:25.648 11:14:21 -- target/dif.sh@30 -- # for sub in "$@" 
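Before the next fio pass the harness rebuilds the targets with NULL_DIF=1 (DIF type 1 instead of type 2) and switches the workload to bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 with two files. The create_subsystem trace that continues below boils down to four RPCs per subsystem; a rough rpc.py equivalent is sketched here (the scripts/rpc.py path and default RPC socket are assumptions, and the tcp transport is assumed to have been created earlier in the test):

# Null bdev with 512-byte blocks + 16-byte metadata and DIF type 1, exported
# over NVMe/TCP on 10.0.0.2:4420 -- mirrors the rpc_cmd calls traced below.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
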
00:28:25.648 11:14:21 -- target/dif.sh@31 -- # create_subsystem 0 00:28:25.648 11:14:21 -- target/dif.sh@18 -- # local sub_id=0 00:28:25.648 11:14:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 bdev_null0 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.648 11:14:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:25.648 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.648 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.649 [2024-05-15 11:14:21.215940] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.649 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.649 11:14:21 -- target/dif.sh@30 -- # for sub in "$@" 00:28:25.649 11:14:21 -- target/dif.sh@31 -- # create_subsystem 1 00:28:25.649 11:14:21 -- target/dif.sh@18 -- # local sub_id=1 00:28:25.649 11:14:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:25.649 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.649 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.649 bdev_null1 00:28:25.649 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.649 11:14:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:25.649 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.649 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.649 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.649 11:14:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:25.649 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.649 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.649 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.649 11:14:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.649 11:14:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.649 11:14:21 -- common/autotest_common.sh@10 -- # set +x 00:28:25.649 11:14:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.649 11:14:21 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:25.649 11:14:21 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:25.649 11:14:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:25.649 11:14:21 -- nvmf/common.sh@521 -- # config=() 00:28:25.649 11:14:21 -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.649 11:14:21 -- nvmf/common.sh@521 -- # local subsystem config 00:28:25.649 11:14:21 -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.649 11:14:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:25.649 11:14:21 -- target/dif.sh@82 -- # gen_fio_conf 00:28:25.649 11:14:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:25.649 { 00:28:25.649 "params": { 00:28:25.649 "name": "Nvme$subsystem", 00:28:25.649 "trtype": "$TEST_TRANSPORT", 00:28:25.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.649 "adrfam": "ipv4", 00:28:25.649 "trsvcid": "$NVMF_PORT", 00:28:25.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.649 "hdgst": ${hdgst:-false}, 00:28:25.649 "ddgst": ${ddgst:-false} 00:28:25.649 }, 00:28:25.649 "method": "bdev_nvme_attach_controller" 00:28:25.649 } 00:28:25.649 EOF 00:28:25.649 )") 00:28:25.649 11:14:21 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:25.649 11:14:21 -- target/dif.sh@54 -- # local file 00:28:25.649 11:14:21 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:25.649 11:14:21 -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:25.649 11:14:21 -- target/dif.sh@56 -- # cat 00:28:25.649 11:14:21 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.649 11:14:21 -- common/autotest_common.sh@1337 -- # shift 00:28:25.649 11:14:21 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:25.649 11:14:21 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.649 11:14:21 -- nvmf/common.sh@543 -- # cat 00:28:25.649 11:14:21 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.649 11:14:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:25.649 11:14:21 -- common/autotest_common.sh@1341 -- # grep libasan 00:28:25.649 11:14:21 -- target/dif.sh@72 -- # (( file <= files )) 00:28:25.649 11:14:21 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:25.649 11:14:21 -- target/dif.sh@73 -- # cat 00:28:25.649 11:14:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:25.649 11:14:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:25.649 { 00:28:25.649 "params": { 00:28:25.649 "name": "Nvme$subsystem", 00:28:25.649 "trtype": "$TEST_TRANSPORT", 00:28:25.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.649 "adrfam": "ipv4", 00:28:25.649 "trsvcid": "$NVMF_PORT", 00:28:25.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.649 "hdgst": ${hdgst:-false}, 00:28:25.649 "ddgst": ${ddgst:-false} 00:28:25.649 }, 00:28:25.649 "method": "bdev_nvme_attach_controller" 00:28:25.649 } 00:28:25.649 EOF 00:28:25.649 )") 00:28:25.649 11:14:21 -- target/dif.sh@72 -- # (( file++ )) 00:28:25.649 11:14:21 -- target/dif.sh@72 -- # (( file <= files )) 00:28:25.649 11:14:21 -- nvmf/common.sh@543 -- # cat 00:28:25.649 11:14:21 -- nvmf/common.sh@545 -- # jq . 
00:28:25.649 11:14:21 -- nvmf/common.sh@546 -- # IFS=, 00:28:25.649 11:14:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:25.649 "params": { 00:28:25.649 "name": "Nvme0", 00:28:25.649 "trtype": "tcp", 00:28:25.649 "traddr": "10.0.0.2", 00:28:25.649 "adrfam": "ipv4", 00:28:25.649 "trsvcid": "4420", 00:28:25.649 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:25.649 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:25.649 "hdgst": false, 00:28:25.649 "ddgst": false 00:28:25.649 }, 00:28:25.649 "method": "bdev_nvme_attach_controller" 00:28:25.649 },{ 00:28:25.649 "params": { 00:28:25.649 "name": "Nvme1", 00:28:25.649 "trtype": "tcp", 00:28:25.649 "traddr": "10.0.0.2", 00:28:25.649 "adrfam": "ipv4", 00:28:25.649 "trsvcid": "4420", 00:28:25.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:25.649 "hdgst": false, 00:28:25.649 "ddgst": false 00:28:25.649 }, 00:28:25.649 "method": "bdev_nvme_attach_controller" 00:28:25.649 }' 00:28:25.649 11:14:21 -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:25.649 11:14:21 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:25.649 11:14:21 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.649 11:14:21 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.649 11:14:21 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:25.649 11:14:21 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:25.649 11:14:21 -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:25.649 11:14:21 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:25.649 11:14:21 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:25.649 11:14:21 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.649 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:25.649 ... 00:28:25.649 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:25.649 ... 
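The filename0/filename1 headers above imply the job shape for this pass: random reads with read/write/trim block sizes of 8k/16k/128k, queue depth 8, and two jobs per file across two bdevs (hence the 4 threads started below). A hypothetical reconstruction of the job file the harness pipes in on /dev/fd/61 (the Nvme0n1/Nvme1n1 bdev names are assumptions; gen_fio_conf's real output is not shown in this excerpt):

cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
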
00:28:25.649 fio-3.35 00:28:25.649 Starting 4 threads 00:28:25.649 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.933 00:28:30.933 filename0: (groupid=0, jobs=1): err= 0: pid=533919: Wed May 15 11:14:27 2024 00:28:30.933 read: IOPS=2110, BW=16.5MiB/s (17.3MB/s)(82.5MiB/5003msec) 00:28:30.933 slat (nsec): min=5612, max=32588, avg=6319.20, stdev=2034.66 00:28:30.933 clat (usec): min=1360, max=6963, avg=3772.40, stdev=598.08 00:28:30.933 lat (usec): min=1366, max=6970, avg=3778.72, stdev=597.94 00:28:30.933 clat percentiles (usec): 00:28:30.933 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3130], 20.00th=[ 3392], 00:28:30.933 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3785], 00:28:30.933 | 70.00th=[ 3818], 80.00th=[ 3982], 90.00th=[ 4555], 95.00th=[ 5080], 00:28:30.933 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6521], 99.95th=[ 6587], 00:28:30.933 | 99.99th=[ 6980] 00:28:30.933 bw ( KiB/s): min=16544, max=17216, per=25.35%, avg=16942.22, stdev=203.09, samples=9 00:28:30.933 iops : min= 2068, max= 2152, avg=2117.78, stdev=25.39, samples=9 00:28:30.933 lat (msec) : 2=0.13%, 4=80.74%, 10=19.13% 00:28:30.933 cpu : usr=97.48%, sys=2.28%, ctx=6, majf=0, minf=9 00:28:30.933 IO depths : 1=0.1%, 2=0.5%, 4=71.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.933 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.933 issued rwts: total=10560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.933 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:30.933 filename0: (groupid=0, jobs=1): err= 0: pid=533920: Wed May 15 11:14:27 2024 00:28:30.933 read: IOPS=2207, BW=17.2MiB/s (18.1MB/s)(86.3MiB/5002msec) 00:28:30.933 slat (nsec): min=5603, max=32874, avg=6191.03, stdev=1777.84 00:28:30.933 clat (usec): min=1126, max=6053, avg=3606.94, stdev=491.37 00:28:30.933 lat (usec): min=1132, max=6059, avg=3613.13, stdev=491.28 00:28:30.933 clat percentiles (usec): 00:28:30.933 | 1.00th=[ 2442], 5.00th=[ 2802], 10.00th=[ 2966], 20.00th=[ 3228], 00:28:30.933 | 30.00th=[ 3425], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3785], 00:28:30.933 | 70.00th=[ 3818], 80.00th=[ 3818], 90.00th=[ 4146], 95.00th=[ 4490], 00:28:30.933 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5604], 99.95th=[ 5800], 00:28:30.933 | 99.99th=[ 6063] 00:28:30.933 bw ( KiB/s): min=17024, max=18144, per=26.29%, avg=17576.89, stdev=400.97, samples=9 00:28:30.933 iops : min= 2128, max= 2268, avg=2197.11, stdev=50.12, samples=9 00:28:30.933 lat (msec) : 2=0.20%, 4=87.42%, 10=12.38% 00:28:30.933 cpu : usr=97.62%, sys=2.16%, ctx=11, majf=0, minf=9 00:28:30.933 IO depths : 1=0.1%, 2=2.6%, 4=67.8%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.933 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.933 issued rwts: total=11043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.933 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:30.933 filename1: (groupid=0, jobs=1): err= 0: pid=533921: Wed May 15 11:14:27 2024 00:28:30.933 read: IOPS=2031, BW=15.9MiB/s (16.6MB/s)(79.4MiB/5002msec) 00:28:30.933 slat (nsec): min=5604, max=29795, avg=6313.43, stdev=2042.44 00:28:30.933 clat (usec): min=1392, max=6963, avg=3921.81, stdev=631.11 00:28:30.933 lat (usec): min=1400, max=6969, avg=3928.12, stdev=630.99 00:28:30.933 clat percentiles (usec): 00:28:30.933 | 1.00th=[ 2868], 5.00th=[ 3228], 
10.00th=[ 3425], 20.00th=[ 3556], 00:28:30.933 | 30.00th=[ 3654], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3818], 00:28:30.933 | 70.00th=[ 3884], 80.00th=[ 4113], 90.00th=[ 4686], 95.00th=[ 5538], 00:28:30.933 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 6587], 99.95th=[ 6652], 00:28:30.933 | 99.99th=[ 6980] 00:28:30.933 bw ( KiB/s): min=15792, max=16624, per=24.34%, avg=16272.00, stdev=313.13, samples=9 00:28:30.933 iops : min= 1974, max= 2078, avg=2034.00, stdev=39.14, samples=9 00:28:30.934 lat (msec) : 2=0.10%, 4=73.39%, 10=26.51% 00:28:30.934 cpu : usr=97.90%, sys=1.88%, ctx=5, majf=0, minf=9 00:28:30.934 IO depths : 1=0.1%, 2=0.1%, 4=71.3%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.934 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.934 issued rwts: total=10161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:30.934 filename1: (groupid=0, jobs=1): err= 0: pid=533922: Wed May 15 11:14:27 2024 00:28:30.934 read: IOPS=2006, BW=15.7MiB/s (16.4MB/s)(78.4MiB/5003msec) 00:28:30.934 slat (nsec): min=2704, max=27927, avg=6052.39, stdev=1116.74 00:28:30.934 clat (usec): min=2350, max=44239, avg=3969.56, stdev=1276.48 00:28:30.934 lat (usec): min=2356, max=44248, avg=3975.61, stdev=1276.40 00:28:30.934 clat percentiles (usec): 00:28:30.934 | 1.00th=[ 3130], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3589], 00:28:30.934 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3818], 00:28:30.934 | 70.00th=[ 3916], 80.00th=[ 4113], 90.00th=[ 4555], 95.00th=[ 5473], 00:28:30.934 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[44303], 00:28:30.934 | 99.99th=[44303] 00:28:30.934 bw ( KiB/s): min=14640, max=16352, per=24.02%, avg=16058.67, stdev=549.04, samples=9 00:28:30.934 iops : min= 1830, max= 2044, avg=2007.33, stdev=68.63, samples=9 00:28:30.934 lat (msec) : 4=72.69%, 10=27.23%, 50=0.08% 00:28:30.934 cpu : usr=97.68%, sys=2.06%, ctx=32, majf=0, minf=0 00:28:30.934 IO depths : 1=0.2%, 2=0.5%, 4=73.2%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.934 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.934 issued rwts: total=10038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:30.934 00:28:30.934 Run status group 0 (all jobs): 00:28:30.934 READ: bw=65.3MiB/s (68.4MB/s), 15.7MiB/s-17.2MiB/s (16.4MB/s-18.1MB/s), io=327MiB (342MB), run=5002-5003msec 00:28:30.934 11:14:27 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:30.934 11:14:27 -- target/dif.sh@43 -- # local sub 00:28:30.934 11:14:27 -- target/dif.sh@45 -- # for sub in "$@" 00:28:30.934 11:14:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:30.934 11:14:27 -- target/dif.sh@36 -- # local sub_id=0 00:28:30.934 11:14:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:30.934 11:14:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.934 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 11:14:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.934 11:14:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:30.934 11:14:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.934 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 
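The bdev configuration printed on a single line earlier in the trace is easier to read pretty-printed. Each attached subsystem contributes one entry of the following shape (values copied from the Nvme0 entry above; Nvme1 differs only in the cnode/host index), which the gen_nvmf_target_json helper then assembles into the full JSON config handed to fio:

    # One config entry as emitted above (shown via a heredoc for readability)
    cat <<'EOF'
    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF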
11:14:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.934 11:14:27 -- target/dif.sh@45 -- # for sub in "$@" 00:28:30.934 11:14:27 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:30.934 11:14:27 -- target/dif.sh@36 -- # local sub_id=1 00:28:30.934 11:14:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.934 11:14:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.934 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 11:14:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.934 11:14:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:30.934 11:14:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.934 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 11:14:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.934 00:28:30.934 real 0m24.609s 00:28:30.934 user 5m16.655s 00:28:30.934 sys 0m3.396s 00:28:30.934 11:14:27 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:30.934 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 ************************************ 00:28:30.934 END TEST fio_dif_rand_params 00:28:30.934 ************************************ 00:28:31.194 11:14:27 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:31.194 11:14:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:31.194 11:14:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:31.194 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:31.194 ************************************ 00:28:31.194 START TEST fio_dif_digest 00:28:31.194 ************************************ 00:28:31.194 11:14:27 -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:28:31.194 11:14:27 -- target/dif.sh@123 -- # local NULL_DIF 00:28:31.194 11:14:27 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:31.194 11:14:27 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:31.195 11:14:27 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:31.195 11:14:27 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:31.195 11:14:27 -- target/dif.sh@127 -- # numjobs=3 00:28:31.195 11:14:27 -- target/dif.sh@127 -- # iodepth=3 00:28:31.195 11:14:27 -- target/dif.sh@127 -- # runtime=10 00:28:31.195 11:14:27 -- target/dif.sh@128 -- # hdgst=true 00:28:31.195 11:14:27 -- target/dif.sh@128 -- # ddgst=true 00:28:31.195 11:14:27 -- target/dif.sh@130 -- # create_subsystems 0 00:28:31.195 11:14:27 -- target/dif.sh@28 -- # local sub 00:28:31.195 11:14:27 -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.195 11:14:27 -- target/dif.sh@31 -- # create_subsystem 0 00:28:31.195 11:14:27 -- target/dif.sh@18 -- # local sub_id=0 00:28:31.195 11:14:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:31.195 11:14:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.195 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:31.195 bdev_null0 00:28:31.195 11:14:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.195 11:14:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:31.195 11:14:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.195 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:31.195 11:14:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.195 11:14:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
00:28:31.195 11:14:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.195 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:31.195 11:14:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.195 11:14:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:31.195 11:14:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.195 11:14:27 -- common/autotest_common.sh@10 -- # set +x 00:28:31.195 [2024-05-15 11:14:27.685382] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.195 11:14:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.195 11:14:27 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:31.195 11:14:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.195 11:14:27 -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.195 11:14:27 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:31.195 11:14:27 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:31.195 11:14:27 -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:31.195 11:14:27 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.195 11:14:27 -- common/autotest_common.sh@1337 -- # shift 00:28:31.195 11:14:27 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:31.195 11:14:27 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.195 11:14:27 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:31.195 11:14:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:31.195 11:14:27 -- target/dif.sh@82 -- # gen_fio_conf 00:28:31.195 11:14:27 -- nvmf/common.sh@521 -- # config=() 00:28:31.195 11:14:27 -- target/dif.sh@54 -- # local file 00:28:31.195 11:14:27 -- nvmf/common.sh@521 -- # local subsystem config 00:28:31.195 11:14:27 -- target/dif.sh@56 -- # cat 00:28:31.195 11:14:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:31.195 11:14:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:31.195 { 00:28:31.195 "params": { 00:28:31.195 "name": "Nvme$subsystem", 00:28:31.195 "trtype": "$TEST_TRANSPORT", 00:28:31.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.195 "adrfam": "ipv4", 00:28:31.195 "trsvcid": "$NVMF_PORT", 00:28:31.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.195 "hdgst": ${hdgst:-false}, 00:28:31.195 "ddgst": ${ddgst:-false} 00:28:31.195 }, 00:28:31.195 "method": "bdev_nvme_attach_controller" 00:28:31.195 } 00:28:31.195 EOF 00:28:31.195 )") 00:28:31.195 11:14:27 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.195 11:14:27 -- common/autotest_common.sh@1341 -- # grep libasan 00:28:31.195 11:14:27 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:31.195 11:14:27 -- nvmf/common.sh@543 -- # cat 00:28:31.195 11:14:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:31.195 11:14:27 -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.195 11:14:27 -- nvmf/common.sh@545 -- # jq . 
00:28:31.195 11:14:27 -- nvmf/common.sh@546 -- # IFS=, 00:28:31.195 11:14:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:31.195 "params": { 00:28:31.195 "name": "Nvme0", 00:28:31.195 "trtype": "tcp", 00:28:31.195 "traddr": "10.0.0.2", 00:28:31.195 "adrfam": "ipv4", 00:28:31.195 "trsvcid": "4420", 00:28:31.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.195 "hdgst": true, 00:28:31.195 "ddgst": true 00:28:31.195 }, 00:28:31.195 "method": "bdev_nvme_attach_controller" 00:28:31.195 }' 00:28:31.195 11:14:27 -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:31.195 11:14:27 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:31.195 11:14:27 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.195 11:14:27 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.195 11:14:27 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:31.195 11:14:27 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:31.195 11:14:27 -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:31.195 11:14:27 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:31.195 11:14:27 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:31.195 11:14:27 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.455 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:31.455 ... 00:28:31.455 fio-3.35 00:28:31.455 Starting 3 threads 00:28:31.716 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.005 00:28:44.005 filename0: (groupid=0, jobs=1): err= 0: pid=535435: Wed May 15 11:14:38 2024 00:28:44.005 read: IOPS=183, BW=23.0MiB/s (24.1MB/s)(231MiB/10040msec) 00:28:44.005 slat (nsec): min=5995, max=36648, avg=7603.38, stdev=1724.07 00:28:44.005 clat (usec): min=7612, max=95195, avg=16295.78, stdev=11455.51 00:28:44.005 lat (usec): min=7619, max=95202, avg=16303.39, stdev=11455.50 00:28:44.005 clat percentiles (usec): 00:28:44.005 | 1.00th=[ 8848], 5.00th=[11076], 10.00th=[11600], 20.00th=[12256], 00:28:44.005 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:28:44.005 | 70.00th=[13829], 80.00th=[14222], 90.00th=[15533], 95.00th=[52691], 00:28:44.005 | 99.00th=[55313], 99.50th=[55313], 99.90th=[94897], 99.95th=[94897], 00:28:44.005 | 99.99th=[94897] 00:28:44.005 bw ( KiB/s): min=14592, max=27136, per=28.58%, avg=23603.20, stdev=2982.67, samples=20 00:28:44.005 iops : min= 114, max= 212, avg=184.40, stdev=23.30, samples=20 00:28:44.005 lat (msec) : 10=2.54%, 20=89.55%, 50=0.05%, 100=7.85% 00:28:44.005 cpu : usr=95.35%, sys=4.41%, ctx=25, majf=0, minf=94 00:28:44.005 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:44.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.005 issued rwts: total=1847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.005 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:44.005 filename0: (groupid=0, jobs=1): err= 0: pid=535436: Wed May 15 11:14:38 2024 00:28:44.005 read: IOPS=237, BW=29.7MiB/s (31.2MB/s)(299MiB/10047msec) 00:28:44.005 slat (nsec): min=5939, max=40901, avg=7067.43, stdev=1610.50 00:28:44.005 clat (usec): 
min=6597, max=55129, avg=12595.08, stdev=2694.42 00:28:44.005 lat (usec): min=6608, max=55138, avg=12602.15, stdev=2694.42 00:28:44.005 clat percentiles (usec): 00:28:44.005 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:28:44.005 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13042], 60.00th=[13435], 00:28:44.005 | 70.00th=[13698], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:28:44.005 | 99.00th=[16057], 99.50th=[16712], 99.90th=[52691], 99.95th=[53740], 00:28:44.005 | 99.99th=[55313] 00:28:44.005 bw ( KiB/s): min=27648, max=34048, per=36.98%, avg=30540.80, stdev=1347.22, samples=20 00:28:44.005 iops : min= 216, max= 266, avg=238.60, stdev=10.53, samples=20 00:28:44.005 lat (msec) : 10=17.09%, 20=82.71%, 50=0.04%, 100=0.17% 00:28:44.005 cpu : usr=95.15%, sys=4.61%, ctx=20, majf=0, minf=198 00:28:44.005 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:44.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.005 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.005 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:44.005 filename0: (groupid=0, jobs=1): err= 0: pid=535437: Wed May 15 11:14:38 2024 00:28:44.005 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(281MiB/10045msec) 00:28:44.005 slat (nsec): min=5817, max=32160, avg=7405.98, stdev=1414.54 00:28:44.005 clat (usec): min=7809, max=56864, avg=13382.44, stdev=3869.18 00:28:44.005 lat (usec): min=7818, max=56874, avg=13389.84, stdev=3869.39 00:28:44.005 clat percentiles (usec): 00:28:44.005 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10683], 00:28:44.005 | 30.00th=[12256], 40.00th=[13173], 50.00th=[13698], 60.00th=[14091], 00:28:44.005 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:28:44.005 | 99.00th=[17171], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:28:44.005 | 99.99th=[56886] 00:28:44.005 bw ( KiB/s): min=25856, max=33024, per=34.80%, avg=28736.00, stdev=1575.63, samples=20 00:28:44.005 iops : min= 202, max= 258, avg=224.50, stdev=12.31, samples=20 00:28:44.005 lat (msec) : 10=12.91%, 20=86.47%, 50=0.09%, 100=0.53% 00:28:44.005 cpu : usr=95.56%, sys=4.20%, ctx=17, majf=0, minf=159 00:28:44.005 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:44.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.005 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.005 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:44.005 00:28:44.005 Run status group 0 (all jobs): 00:28:44.005 READ: bw=80.6MiB/s (84.6MB/s), 23.0MiB/s-29.7MiB/s (24.1MB/s-31.2MB/s), io=810MiB (850MB), run=10040-10047msec 00:28:44.005 11:14:38 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:44.006 11:14:38 -- target/dif.sh@43 -- # local sub 00:28:44.006 11:14:38 -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.006 11:14:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:44.006 11:14:38 -- target/dif.sh@36 -- # local sub_id=0 00:28:44.006 11:14:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:44.006 11:14:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.006 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:28:44.006 11:14:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
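The digest run above talks to a target that is stood up and torn down purely over JSON-RPC. A sketch of the sequence the rpc_cmd calls in the trace correspond to (rpc_cmd resolves to scripts/rpc.py against the target's RPC socket; arguments mirror the trace, teardown runs in reverse as destroy_subsystems does here):

    RPC="./scripts/rpc.py"   # run from the SPDK repo root; socket defaults to /var/tmp/spdk.sock

    # create: a null bdev with 16-byte metadata and DIF type 3, exposed over NVMe/TCP
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # teardown
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC bdev_null_delete bdev_null0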
00:28:44.006 11:14:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:44.006 11:14:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.006 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:28:44.006 11:14:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.006 00:28:44.006 real 0m11.212s 00:28:44.006 user 0m40.525s 00:28:44.006 sys 0m1.669s 00:28:44.006 11:14:38 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:44.006 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:28:44.006 ************************************ 00:28:44.006 END TEST fio_dif_digest 00:28:44.006 ************************************ 00:28:44.006 11:14:38 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:44.006 11:14:38 -- target/dif.sh@147 -- # nvmftestfini 00:28:44.006 11:14:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:44.006 11:14:38 -- nvmf/common.sh@117 -- # sync 00:28:44.006 11:14:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:44.006 11:14:38 -- nvmf/common.sh@120 -- # set +e 00:28:44.006 11:14:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:44.006 11:14:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:44.006 rmmod nvme_tcp 00:28:44.006 rmmod nvme_fabrics 00:28:44.006 rmmod nvme_keyring 00:28:44.006 11:14:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:44.006 11:14:38 -- nvmf/common.sh@124 -- # set -e 00:28:44.006 11:14:38 -- nvmf/common.sh@125 -- # return 0 00:28:44.006 11:14:38 -- nvmf/common.sh@478 -- # '[' -n 524882 ']' 00:28:44.006 11:14:38 -- nvmf/common.sh@479 -- # killprocess 524882 00:28:44.006 11:14:38 -- common/autotest_common.sh@946 -- # '[' -z 524882 ']' 00:28:44.006 11:14:38 -- common/autotest_common.sh@950 -- # kill -0 524882 00:28:44.006 11:14:38 -- common/autotest_common.sh@951 -- # uname 00:28:44.006 11:14:38 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:44.006 11:14:38 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 524882 00:28:44.006 11:14:39 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:44.006 11:14:39 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:44.006 11:14:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 524882' 00:28:44.006 killing process with pid 524882 00:28:44.006 11:14:39 -- common/autotest_common.sh@965 -- # kill 524882 00:28:44.006 [2024-05-15 11:14:39.027810] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:44.006 11:14:39 -- common/autotest_common.sh@970 -- # wait 524882 00:28:44.006 11:14:39 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:44.006 11:14:39 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:45.915 Waiting for block devices as requested 00:28:45.915 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:45.915 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:45.915 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:45.915 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:46.175 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:46.175 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:46.175 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:46.175 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:46.435 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:46.435 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:46.695 0000:00:01.7 (8086 0b00): 
vfio-pci -> ioatdma 00:28:46.695 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:46.695 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:46.695 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:46.955 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:46.955 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:46.955 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:47.216 11:14:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:47.216 11:14:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:47.216 11:14:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:47.216 11:14:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:47.216 11:14:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.216 11:14:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:47.216 11:14:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.758 11:14:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:49.758 00:28:49.758 real 1m16.838s 00:28:49.758 user 7m57.225s 00:28:49.758 sys 0m18.808s 00:28:49.758 11:14:45 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:49.758 11:14:45 -- common/autotest_common.sh@10 -- # set +x 00:28:49.758 ************************************ 00:28:49.758 END TEST nvmf_dif 00:28:49.758 ************************************ 00:28:49.758 11:14:45 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:49.758 11:14:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:49.758 11:14:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:49.758 11:14:45 -- common/autotest_common.sh@10 -- # set +x 00:28:49.758 ************************************ 00:28:49.758 START TEST nvmf_abort_qd_sizes 00:28:49.758 ************************************ 00:28:49.758 11:14:45 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:49.758 * Looking for test storage... 
00:28:49.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.758 11:14:46 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.758 11:14:46 -- nvmf/common.sh@7 -- # uname -s 00:28:49.758 11:14:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.759 11:14:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.759 11:14:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.759 11:14:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.759 11:14:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.759 11:14:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.759 11:14:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.759 11:14:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.759 11:14:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.759 11:14:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.759 11:14:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.759 11:14:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.759 11:14:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.759 11:14:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.759 11:14:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.759 11:14:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.759 11:14:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.759 11:14:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.759 11:14:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.759 11:14:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.759 11:14:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.759 11:14:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.759 11:14:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.759 11:14:46 -- paths/export.sh@5 -- # export PATH 00:28:49.759 11:14:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.759 11:14:46 -- nvmf/common.sh@47 -- # : 0 00:28:49.759 11:14:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.759 11:14:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.759 11:14:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.759 11:14:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.759 11:14:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.759 11:14:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.759 11:14:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.759 11:14:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.759 11:14:46 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:49.759 11:14:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:49.759 11:14:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.759 11:14:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:49.759 11:14:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:49.759 11:14:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:49.759 11:14:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.759 11:14:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:49.759 11:14:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.759 11:14:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:49.759 11:14:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:49.759 11:14:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.759 11:14:46 -- common/autotest_common.sh@10 -- # set +x 00:28:56.341 11:14:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:56.341 11:14:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:56.341 11:14:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:56.341 11:14:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:56.341 11:14:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:56.341 11:14:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:56.341 11:14:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:56.341 11:14:52 -- nvmf/common.sh@295 -- # net_devs=() 00:28:56.341 11:14:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:56.341 11:14:52 -- nvmf/common.sh@296 -- # e810=() 00:28:56.341 11:14:52 -- nvmf/common.sh@296 -- # local -ga e810 00:28:56.341 11:14:52 -- nvmf/common.sh@297 -- # x722=() 00:28:56.341 11:14:52 -- nvmf/common.sh@297 -- # local -ga x722 00:28:56.341 11:14:52 -- nvmf/common.sh@298 -- # mlx=() 00:28:56.341 11:14:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:56.341 11:14:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.341 11:14:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:56.341 11:14:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:56.341 11:14:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:56.341 11:14:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.341 11:14:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:56.341 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:56.341 11:14:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.341 11:14:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:56.341 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:56.341 11:14:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:56.341 11:14:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.341 11:14:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.341 11:14:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:56.341 11:14:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.341 11:14:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:56.341 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:56.341 11:14:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.341 11:14:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.341 11:14:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.341 11:14:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:56.341 11:14:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.341 11:14:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:56.341 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:56.341 11:14:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.341 11:14:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:56.341 11:14:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:56.341 11:14:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:56.341 11:14:52 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:56.341 11:14:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:56.341 11:14:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.341 11:14:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.341 11:14:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.341 11:14:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:56.341 11:14:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.341 11:14:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.341 11:14:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:56.341 11:14:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.341 11:14:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.341 11:14:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:56.341 11:14:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:56.341 11:14:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.341 11:14:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.341 11:14:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.341 11:14:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.341 11:14:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:56.341 11:14:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.603 11:14:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.603 11:14:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.603 11:14:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:56.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:28:56.603 00:28:56.603 --- 10.0.0.2 ping statistics --- 00:28:56.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.603 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:28:56.603 11:14:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:28:56.603 00:28:56.603 --- 10.0.0.1 ping statistics --- 00:28:56.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.603 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:28:56.603 11:14:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.603 11:14:53 -- nvmf/common.sh@411 -- # return 0 00:28:56.603 11:14:53 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:56.603 11:14:53 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:59.904 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:59.904 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:00.479 11:14:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.479 11:14:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:00.479 11:14:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:00.479 11:14:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.479 11:14:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:00.479 11:14:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:00.479 11:14:56 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:00.479 11:14:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:00.479 11:14:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:00.479 11:14:56 -- common/autotest_common.sh@10 -- # set +x 00:29:00.479 11:14:56 -- nvmf/common.sh@470 -- # nvmfpid=544841 00:29:00.479 11:14:56 -- nvmf/common.sh@471 -- # waitforlisten 544841 00:29:00.479 11:14:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:00.479 11:14:56 -- common/autotest_common.sh@827 -- # '[' -z 544841 ']' 00:29:00.479 11:14:56 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.479 11:14:56 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:00.479 11:14:56 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.479 11:14:56 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:00.479 11:14:56 -- common/autotest_common.sh@10 -- # set +x 00:29:00.479 [2024-05-15 11:14:56.962817] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
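The target for the abort-queue-depth tests runs inside a network namespace wired up by nvmf_tcp_init above. Condensed from the trace, the plumbing and the target launch look roughly like this (cvl_0_0/cvl_0_1 are the e810 port netdevs detected earlier; the backgrounding and waitforlisten detail is handled by the harness helpers):

    NS=cvl_0_0_ns_spdk

    # move one port into a private namespace and address both ends
    ip netns add $NS
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # start the SPDK target inside the namespace, then wait for its RPC socket
    ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    # waitforlisten $! /var/tmp/spdk.sock   (harness helper, as in the trace)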
00:29:00.479 [2024-05-15 11:14:56.962880] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.479 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.479 [2024-05-15 11:14:57.032176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.479 [2024-05-15 11:14:57.107527] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.479 [2024-05-15 11:14:57.107566] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.479 [2024-05-15 11:14:57.107574] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.479 [2024-05-15 11:14:57.107581] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.479 [2024-05-15 11:14:57.107589] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.479 [2024-05-15 11:14:57.107683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.479 [2024-05-15 11:14:57.107772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.479 [2024-05-15 11:14:57.108235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.479 [2024-05-15 11:14:57.108235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.419 11:14:57 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:01.419 11:14:57 -- common/autotest_common.sh@860 -- # return 0 00:29:01.419 11:14:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:01.419 11:14:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.419 11:14:57 -- common/autotest_common.sh@10 -- # set +x 00:29:01.419 11:14:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.419 11:14:57 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:01.419 11:14:57 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:01.419 11:14:57 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:01.419 11:14:57 -- scripts/common.sh@309 -- # local bdf bdfs 00:29:01.419 11:14:57 -- scripts/common.sh@310 -- # local nvmes 00:29:01.419 11:14:57 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:29:01.419 11:14:57 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:01.419 11:14:57 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:01.419 11:14:57 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:29:01.419 11:14:57 -- scripts/common.sh@320 -- # uname -s 00:29:01.419 11:14:57 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:01.419 11:14:57 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:01.419 11:14:57 -- scripts/common.sh@325 -- # (( 1 )) 00:29:01.419 11:14:57 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:29:01.419 11:14:57 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:01.420 11:14:57 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:29:01.420 11:14:57 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:01.420 11:14:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:01.420 11:14:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:01.420 11:14:57 -- 
common/autotest_common.sh@10 -- # set +x 00:29:01.420 ************************************ 00:29:01.420 START TEST spdk_target_abort 00:29:01.420 ************************************ 00:29:01.420 11:14:57 -- common/autotest_common.sh@1121 -- # spdk_target 00:29:01.420 11:14:57 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:01.420 11:14:57 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:29:01.420 11:14:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.420 11:14:57 -- common/autotest_common.sh@10 -- # set +x 00:29:01.682 spdk_targetn1 00:29:01.682 11:14:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.682 11:14:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.682 11:14:58 -- common/autotest_common.sh@10 -- # set +x 00:29:01.682 [2024-05-15 11:14:58.148498] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.682 11:14:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:01.682 11:14:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.682 11:14:58 -- common/autotest_common.sh@10 -- # set +x 00:29:01.682 11:14:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:01.682 11:14:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.682 11:14:58 -- common/autotest_common.sh@10 -- # set +x 00:29:01.682 11:14:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:01.682 11:14:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.682 11:14:58 -- common/autotest_common.sh@10 -- # set +x 00:29:01.682 [2024-05-15 11:14:58.188554] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:01.682 [2024-05-15 11:14:58.188775] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.682 11:14:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@28 -- # for r in 
trtype adrfam traddr trsvcid subnqn 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:01.682 11:14:58 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:01.682 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.682 [2024-05-15 11:14:58.304555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:384 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:29:01.682 [2024-05-15 11:14:58.304576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0031 p:1 m:0 dnr:0 00:29:01.682 [2024-05-15 11:14:58.311846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:608 len:8 PRP1 0x2000078be000 PRP2 0x0 00:29:01.682 [2024-05-15 11:14:58.311862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004e p:1 m:0 dnr:0 00:29:01.682 [2024-05-15 11:14:58.318991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:816 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:29:01.682 [2024-05-15 11:14:58.319004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0068 p:1 m:0 dnr:0 00:29:01.943 [2024-05-15 11:14:58.357975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2296 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:29:01.943 [2024-05-15 11:14:58.357992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:05.246 Initializing NVMe Controllers 00:29:05.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:05.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:05.246 Initialization complete. Launching workers. 
00:29:05.246 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13120, failed: 4 00:29:05.246 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2891, failed to submit 10233 00:29:05.246 success 730, unsuccess 2161, failed 0 00:29:05.246 11:15:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:05.246 11:15:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:05.246 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.246 [2024-05-15 11:15:01.463579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:832 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:29:05.246 [2024-05-15 11:15:01.463624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:29:05.246 [2024-05-15 11:15:01.557579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:2976 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:29:05.246 [2024-05-15 11:15:01.557606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.629 [2024-05-15 11:15:03.047192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:36832 len:8 PRP1 0x200007c56000 PRP2 0x0 00:29:06.629 [2024-05-15 11:15:03.047230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.013 Initializing NVMe Controllers 00:29:08.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:08.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:08.013 Initialization complete. Launching workers. 00:29:08.013 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8561, failed: 3 00:29:08.013 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 7318 00:29:08.013 success 342, unsuccess 904, failed 0 00:29:08.013 11:15:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:08.013 11:15:04 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:08.013 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.394 [2024-05-15 11:15:05.961402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:180 nsid:1 lba:138248 len:8 PRP1 0x2000078f2000 PRP2 0x0 00:29:09.394 [2024-05-15 11:15:05.961433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:180 cdw0:0 sqhd:00cb p:1 m:0 dnr:0 00:29:11.307 Initializing NVMe Controllers 00:29:11.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:11.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:11.307 Initialization complete. Launching workers. 
00:29:11.307 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43723, failed: 1 00:29:11.307 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2718, failed to submit 41006 00:29:11.307 success 572, unsuccess 2146, failed 0 00:29:11.307 11:15:07 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:11.307 11:15:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.307 11:15:07 -- common/autotest_common.sh@10 -- # set +x 00:29:11.307 11:15:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.307 11:15:07 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:11.307 11:15:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.307 11:15:07 -- common/autotest_common.sh@10 -- # set +x 00:29:13.218 11:15:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.218 11:15:09 -- target/abort_qd_sizes.sh@61 -- # killprocess 544841 00:29:13.218 11:15:09 -- common/autotest_common.sh@946 -- # '[' -z 544841 ']' 00:29:13.218 11:15:09 -- common/autotest_common.sh@950 -- # kill -0 544841 00:29:13.218 11:15:09 -- common/autotest_common.sh@951 -- # uname 00:29:13.218 11:15:09 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:13.218 11:15:09 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 544841 00:29:13.218 11:15:09 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:13.218 11:15:09 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:13.218 11:15:09 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 544841' 00:29:13.218 killing process with pid 544841 00:29:13.218 11:15:09 -- common/autotest_common.sh@965 -- # kill 544841 00:29:13.218 [2024-05-15 11:15:09.720883] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:13.218 11:15:09 -- common/autotest_common.sh@970 -- # wait 544841 00:29:13.218 00:29:13.218 real 0m12.011s 00:29:13.218 user 0m49.115s 00:29:13.218 sys 0m1.636s 00:29:13.218 11:15:09 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:13.218 11:15:09 -- common/autotest_common.sh@10 -- # set +x 00:29:13.218 ************************************ 00:29:13.218 END TEST spdk_target_abort 00:29:13.218 ************************************ 00:29:13.478 11:15:09 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:13.478 11:15:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:13.478 11:15:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:13.478 11:15:09 -- common/autotest_common.sh@10 -- # set +x 00:29:13.478 ************************************ 00:29:13.478 START TEST kernel_target_abort 00:29:13.478 ************************************ 00:29:13.478 11:15:09 -- common/autotest_common.sh@1121 -- # kernel_target 00:29:13.478 11:15:09 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:13.478 11:15:09 -- nvmf/common.sh@717 -- # local ip 00:29:13.478 11:15:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:13.478 11:15:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:13.478 11:15:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.478 11:15:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.478 11:15:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:13.478 11:15:09 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.478 11:15:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:13.478 11:15:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:13.478 11:15:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:13.478 11:15:09 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:13.478 11:15:09 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:13.478 11:15:09 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:13.478 11:15:09 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:13.478 11:15:09 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:13.478 11:15:09 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:13.478 11:15:09 -- nvmf/common.sh@628 -- # local block nvme 00:29:13.478 11:15:09 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:29:13.478 11:15:09 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:13.478 11:15:09 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:13.478 11:15:09 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:16.775 Waiting for block devices as requested 00:29:16.775 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:16.775 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:17.035 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:17.035 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:17.035 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:17.296 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:17.296 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:17.296 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:17.296 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:17.558 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:17.558 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:17.819 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:17.819 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:17.819 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:17.819 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:18.079 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:18.079 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:18.339 11:15:14 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:18.339 11:15:14 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:18.339 11:15:14 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:18.339 11:15:14 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:29:18.339 11:15:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:18.339 11:15:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:29:18.339 11:15:14 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:18.339 11:15:14 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:18.339 11:15:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:18.339 No valid GPT data, bailing 00:29:18.339 11:15:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:18.339 11:15:14 -- scripts/common.sh@391 -- # pt= 00:29:18.339 11:15:14 -- scripts/common.sh@392 -- # return 1 00:29:18.339 11:15:14 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:18.339 11:15:14 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 
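The configfs writes traced in the lines that follow build a kernel NVMe-oF TCP target by hand. The trace only shows the values being echoed, not the destination attribute files, so the sketch below fills those in with the standard kernel nvmet configfs attribute names; the NQN, backing device and address values are the ones from this run, and the serial-number target is an assumption:

    modprobe nvmet nvmet_tcp
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir "$port"

    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$sub/attr_serial"   # assumed target of the 'echo SPDK-…' above
    echo 1 > "$sub/attr_allow_any_host"                            # accept any host NQN
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"            # backing block device selected by the probe above
    echo 1 > "$sub/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"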
00:29:18.339 11:15:14 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:18.339 11:15:14 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:18.339 11:15:14 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:18.339 11:15:14 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:18.339 11:15:14 -- nvmf/common.sh@656 -- # echo 1 00:29:18.339 11:15:14 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:29:18.339 11:15:14 -- nvmf/common.sh@658 -- # echo 1 00:29:18.340 11:15:14 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:18.340 11:15:14 -- nvmf/common.sh@661 -- # echo tcp 00:29:18.340 11:15:14 -- nvmf/common.sh@662 -- # echo 4420 00:29:18.340 11:15:14 -- nvmf/common.sh@663 -- # echo ipv4 00:29:18.340 11:15:14 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:18.340 11:15:14 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:29:18.599 00:29:18.599 Discovery Log Number of Records 2, Generation counter 2 00:29:18.599 =====Discovery Log Entry 0====== 00:29:18.599 trtype: tcp 00:29:18.599 adrfam: ipv4 00:29:18.599 subtype: current discovery subsystem 00:29:18.599 treq: not specified, sq flow control disable supported 00:29:18.599 portid: 1 00:29:18.599 trsvcid: 4420 00:29:18.599 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:18.599 traddr: 10.0.0.1 00:29:18.599 eflags: none 00:29:18.599 sectype: none 00:29:18.599 =====Discovery Log Entry 1====== 00:29:18.599 trtype: tcp 00:29:18.599 adrfam: ipv4 00:29:18.600 subtype: nvme subsystem 00:29:18.600 treq: not specified, sq flow control disable supported 00:29:18.600 portid: 1 00:29:18.600 trsvcid: 4420 00:29:18.600 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:18.600 traddr: 10.0.0.1 00:29:18.600 eflags: none 00:29:18.600 sectype: none 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:18.600 11:15:15 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:18.600 11:15:15 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:18.600 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.898 Initializing NVMe Controllers 00:29:21.898 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:21.898 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:21.898 Initialization complete. Launching workers. 00:29:21.898 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69359, failed: 0 00:29:21.898 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 69359, failed to submit 0 00:29:21.898 success 0, unsuccess 69359, failed 0 00:29:21.898 11:15:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:21.898 11:15:18 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:21.898 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.195 Initializing NVMe Controllers 00:29:25.195 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:25.195 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:25.195 Initialization complete. Launching workers. 00:29:25.195 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 110802, failed: 0 00:29:25.195 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27902, failed to submit 82900 00:29:25.195 success 0, unsuccess 27902, failed 0 00:29:25.195 11:15:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:25.195 11:15:21 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:25.195 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.738 Initializing NVMe Controllers 00:29:27.738 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:27.738 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:27.738 Initialization complete. Launching workers. 
00:29:27.738 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 105674, failed: 0 00:29:27.738 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26418, failed to submit 79256 00:29:27.738 success 0, unsuccess 26418, failed 0 00:29:27.738 11:15:24 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:27.738 11:15:24 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:27.738 11:15:24 -- nvmf/common.sh@675 -- # echo 0 00:29:27.738 11:15:24 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:27.738 11:15:24 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:27.738 11:15:24 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:27.738 11:15:24 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:27.738 11:15:24 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:27.739 11:15:24 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:27.739 11:15:24 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:31.048 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:31.049 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:31.309 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:31.309 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:33.216 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:33.216 00:29:33.216 real 0m19.872s 00:29:33.216 user 0m9.908s 00:29:33.216 sys 0m5.725s 00:29:33.216 11:15:29 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:33.216 11:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:33.216 ************************************ 00:29:33.216 END TEST kernel_target_abort 00:29:33.216 ************************************ 00:29:33.216 11:15:29 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:33.216 11:15:29 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:33.216 11:15:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:33.216 11:15:29 -- nvmf/common.sh@117 -- # sync 00:29:33.216 11:15:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:33.216 11:15:29 -- nvmf/common.sh@120 -- # set +e 00:29:33.216 11:15:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:33.216 11:15:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:33.216 rmmod nvme_tcp 00:29:33.476 rmmod nvme_fabrics 00:29:33.476 rmmod nvme_keyring 00:29:33.476 11:15:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:33.476 11:15:29 -- nvmf/common.sh@124 -- # set -e 00:29:33.476 11:15:29 -- nvmf/common.sh@125 -- # return 0 00:29:33.476 11:15:29 -- nvmf/common.sh@478 -- # '[' -n 544841 ']' 
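clean_kernel_target, traced just above, unwinds that configfs tree in the reverse order it was created before unloading the modules; the bare 'echo 0' in the trace is presumably the namespace being disabled first. Roughly, with the paths from this run:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$sub/namespaces/1/enable"        # assumed target of the 'echo 0' above
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$sub/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$sub"
    modprobe -r nvmet_tcp nvmet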
00:29:33.476 11:15:29 -- nvmf/common.sh@479 -- # killprocess 544841 00:29:33.476 11:15:29 -- common/autotest_common.sh@946 -- # '[' -z 544841 ']' 00:29:33.476 11:15:29 -- common/autotest_common.sh@950 -- # kill -0 544841 00:29:33.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (544841) - No such process 00:29:33.476 11:15:29 -- common/autotest_common.sh@973 -- # echo 'Process with pid 544841 is not found' 00:29:33.476 Process with pid 544841 is not found 00:29:33.476 11:15:29 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:29:33.476 11:15:29 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:36.769 Waiting for block devices as requested 00:29:36.769 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:36.769 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:36.769 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:36.769 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:36.769 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:36.769 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:37.029 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:37.029 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:37.029 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:37.289 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:37.289 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:37.549 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:37.549 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:37.549 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:37.549 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:37.809 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:37.809 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:38.069 11:15:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:38.069 11:15:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:38.069 11:15:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:38.069 11:15:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:38.069 11:15:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.069 11:15:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:38.069 11:15:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.978 11:15:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:40.240 00:29:40.240 real 0m50.671s 00:29:40.240 user 1m4.099s 00:29:40.240 sys 0m17.668s 00:29:40.240 11:15:36 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:40.240 11:15:36 -- common/autotest_common.sh@10 -- # set +x 00:29:40.240 ************************************ 00:29:40.240 END TEST nvmf_abort_qd_sizes 00:29:40.240 ************************************ 00:29:40.240 11:15:36 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:40.240 11:15:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:40.240 11:15:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:40.240 11:15:36 -- common/autotest_common.sh@10 -- # set +x 00:29:40.240 ************************************ 00:29:40.240 START TEST keyring_file 00:29:40.240 ************************************ 00:29:40.240 11:15:36 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:40.240 * Looking for test storage... 
00:29:40.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:40.240 11:15:36 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:40.240 11:15:36 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.240 11:15:36 -- nvmf/common.sh@7 -- # uname -s 00:29:40.240 11:15:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.240 11:15:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.240 11:15:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.240 11:15:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.240 11:15:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.240 11:15:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.240 11:15:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.240 11:15:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.240 11:15:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.240 11:15:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.240 11:15:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:40.240 11:15:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:40.240 11:15:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.240 11:15:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.240 11:15:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.240 11:15:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.240 11:15:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.240 11:15:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.240 11:15:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.240 11:15:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.240 11:15:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.240 11:15:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.240 11:15:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.240 11:15:36 -- paths/export.sh@5 -- # export PATH 00:29:40.240 11:15:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.240 11:15:36 -- nvmf/common.sh@47 -- # : 0 00:29:40.240 11:15:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:40.240 11:15:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:40.240 11:15:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.240 11:15:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.240 11:15:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.240 11:15:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:40.240 11:15:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:40.240 11:15:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:40.240 11:15:36 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:40.240 11:15:36 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:40.240 11:15:36 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:40.240 11:15:36 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:40.240 11:15:36 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:40.240 11:15:36 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:40.240 11:15:36 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:40.240 11:15:36 -- keyring/common.sh@15 -- # local name key digest path 00:29:40.240 11:15:36 -- keyring/common.sh@17 -- # name=key0 00:29:40.240 11:15:36 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:40.240 11:15:36 -- keyring/common.sh@17 -- # digest=0 00:29:40.240 11:15:36 -- keyring/common.sh@18 -- # mktemp 00:29:40.240 11:15:36 -- keyring/common.sh@18 -- # path=/tmp/tmp.fryCtQjDCy 00:29:40.240 11:15:36 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:40.240 11:15:36 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:40.240 11:15:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:40.240 11:15:36 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:40.240 11:15:36 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:40.240 11:15:36 -- nvmf/common.sh@693 -- # digest=0 00:29:40.240 11:15:36 -- nvmf/common.sh@694 -- # python - 00:29:40.501 11:15:36 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fryCtQjDCy 00:29:40.501 11:15:36 -- keyring/common.sh@23 -- # echo /tmp/tmp.fryCtQjDCy 00:29:40.501 11:15:36 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fryCtQjDCy 00:29:40.501 11:15:36 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:40.501 11:15:36 -- keyring/common.sh@15 -- # local name key digest path 00:29:40.501 11:15:36 -- keyring/common.sh@17 -- # name=key1 00:29:40.501 11:15:36 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:40.501 11:15:36 -- keyring/common.sh@17 -- # digest=0 00:29:40.501 11:15:36 -- keyring/common.sh@18 -- # mktemp 00:29:40.501 11:15:36 -- keyring/common.sh@18 -- # path=/tmp/tmp.p54S1yiBk3 00:29:40.501 11:15:36 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:40.501 11:15:36 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:29:40.501 11:15:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:40.501 11:15:36 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:40.501 11:15:36 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:40.501 11:15:36 -- nvmf/common.sh@693 -- # digest=0 00:29:40.501 11:15:36 -- nvmf/common.sh@694 -- # python - 00:29:40.501 11:15:36 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.p54S1yiBk3 00:29:40.501 11:15:36 -- keyring/common.sh@23 -- # echo /tmp/tmp.p54S1yiBk3 00:29:40.501 11:15:36 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.p54S1yiBk3 00:29:40.501 11:15:36 -- keyring/file.sh@30 -- # tgtpid=555426 00:29:40.501 11:15:36 -- keyring/file.sh@32 -- # waitforlisten 555426 00:29:40.501 11:15:36 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:40.501 11:15:36 -- common/autotest_common.sh@827 -- # '[' -z 555426 ']' 00:29:40.501 11:15:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.501 11:15:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:40.501 11:15:36 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.501 11:15:36 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:40.501 11:15:36 -- common/autotest_common.sh@10 -- # set +x 00:29:40.501 [2024-05-15 11:15:37.041205] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 00:29:40.501 [2024-05-15 11:15:37.041277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555426 ] 00:29:40.501 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.501 [2024-05-15 11:15:37.105775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.760 [2024-05-15 11:15:37.180338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.330 11:15:37 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:41.330 11:15:37 -- common/autotest_common.sh@860 -- # return 0 00:29:41.330 11:15:37 -- keyring/file.sh@33 -- # rpc_cmd 00:29:41.330 11:15:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.330 11:15:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.330 [2024-05-15 11:15:37.803383] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.330 null0 00:29:41.330 [2024-05-15 11:15:37.835415] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:41.330 [2024-05-15 11:15:37.835463] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:41.330 [2024-05-15 11:15:37.835692] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:41.330 [2024-05-15 11:15:37.843449] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:41.330 11:15:37 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.330 11:15:37 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:41.330 11:15:37 -- 
common/autotest_common.sh@648 -- # local es=0 00:29:41.330 11:15:37 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:41.330 11:15:37 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:41.330 11:15:37 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.330 11:15:37 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:41.330 11:15:37 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.330 11:15:37 -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:41.330 11:15:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.330 11:15:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.330 [2024-05-15 11:15:37.859491] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:41.330 request: 00:29:41.330 { 00:29:41.330 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:41.330 "secure_channel": false, 00:29:41.330 "listen_address": { 00:29:41.330 "trtype": "tcp", 00:29:41.330 "traddr": "127.0.0.1", 00:29:41.330 "trsvcid": "4420" 00:29:41.330 }, 00:29:41.330 "method": "nvmf_subsystem_add_listener", 00:29:41.330 "req_id": 1 00:29:41.330 } 00:29:41.330 Got JSON-RPC error response 00:29:41.330 response: 00:29:41.330 { 00:29:41.330 "code": -32602, 00:29:41.330 "message": "Invalid parameters" 00:29:41.330 } 00:29:41.330 11:15:37 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:41.330 11:15:37 -- common/autotest_common.sh@651 -- # es=1 00:29:41.330 11:15:37 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:41.330 11:15:37 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:41.330 11:15:37 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:41.330 11:15:37 -- keyring/file.sh@46 -- # bperfpid=555622 00:29:41.330 11:15:37 -- keyring/file.sh@48 -- # waitforlisten 555622 /var/tmp/bperf.sock 00:29:41.330 11:15:37 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:41.330 11:15:37 -- common/autotest_common.sh@827 -- # '[' -z 555622 ']' 00:29:41.330 11:15:37 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:41.330 11:15:37 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:41.330 11:15:37 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:41.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:41.330 11:15:37 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:41.330 11:15:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.330 [2024-05-15 11:15:37.912841] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
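Everything in the remainder of the test drives this bdevperf instance through its private RPC socket, /var/tmp/bperf.sock. Stripped of the refcount checks, the key registration and TLS-PSK attach that the following trace performs come down to these calls (the key file paths are the mktemp names generated earlier in this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # register both PSK files (created earlier with mktemp + chmod 0600) with the keyring
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.p54S1yiBk3

    # verify registration, then attach to the local TLS listener using key0 as the PSK
    $rpc -s /var/tmp/bperf.sock keyring_get_keys
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0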
00:29:41.330 [2024-05-15 11:15:37.912887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555622 ] 00:29:41.330 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.590 [2024-05-15 11:15:37.986987] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.590 [2024-05-15 11:15:38.051764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.159 11:15:38 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:42.159 11:15:38 -- common/autotest_common.sh@860 -- # return 0 00:29:42.159 11:15:38 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy 00:29:42.159 11:15:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy 00:29:42.419 11:15:38 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.p54S1yiBk3 00:29:42.419 11:15:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.p54S1yiBk3 00:29:42.419 11:15:38 -- keyring/file.sh@51 -- # get_key key0 00:29:42.419 11:15:38 -- keyring/file.sh@51 -- # jq -r .path 00:29:42.419 11:15:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.419 11:15:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:42.419 11:15:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.679 11:15:39 -- keyring/file.sh@51 -- # [[ /tmp/tmp.fryCtQjDCy == \/\t\m\p\/\t\m\p\.\f\r\y\C\t\Q\j\D\C\y ]] 00:29:42.679 11:15:39 -- keyring/file.sh@52 -- # get_key key1 00:29:42.679 11:15:39 -- keyring/file.sh@52 -- # jq -r .path 00:29:42.679 11:15:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.679 11:15:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.679 11:15:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:42.679 11:15:39 -- keyring/file.sh@52 -- # [[ /tmp/tmp.p54S1yiBk3 == \/\t\m\p\/\t\m\p\.\p\5\4\S\1\y\i\B\k\3 ]] 00:29:42.679 11:15:39 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:42.679 11:15:39 -- keyring/common.sh@12 -- # get_key key0 00:29:42.679 11:15:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:42.679 11:15:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.679 11:15:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.679 11:15:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:42.938 11:15:39 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:42.938 11:15:39 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:42.938 11:15:39 -- keyring/common.sh@12 -- # get_key key1 00:29:42.938 11:15:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:42.938 11:15:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.938 11:15:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:42.938 11:15:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.198 11:15:39 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:43.198 11:15:39 
-- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:43.198 11:15:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:43.198 [2024-05-15 11:15:39.756246] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:43.198 nvme0n1 00:29:43.198 11:15:39 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:43.198 11:15:39 -- keyring/common.sh@12 -- # get_key key0 00:29:43.198 11:15:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.458 11:15:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:43.458 11:15:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.458 11:15:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.458 11:15:40 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:43.458 11:15:40 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:43.458 11:15:40 -- keyring/common.sh@12 -- # get_key key1 00:29:43.458 11:15:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.458 11:15:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.458 11:15:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:43.458 11:15:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.719 11:15:40 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:43.719 11:15:40 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:43.719 Running I/O for 1 seconds... 
00:29:44.659 00:29:44.659 Latency(us) 00:29:44.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.659 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:44.659 nvme0n1 : 1.01 15386.77 60.10 0.00 0.00 8283.46 6389.76 19005.44 00:29:44.659 =================================================================================================================== 00:29:44.659 Total : 15386.77 60.10 0.00 0.00 8283.46 6389.76 19005.44 00:29:44.659 0 00:29:44.660 11:15:41 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:44.660 11:15:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:44.919 11:15:41 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:44.919 11:15:41 -- keyring/common.sh@12 -- # get_key key0 00:29:44.919 11:15:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:44.919 11:15:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:44.919 11:15:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.919 11:15:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:45.180 11:15:41 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:45.180 11:15:41 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:45.180 11:15:41 -- keyring/common.sh@12 -- # get_key key1 00:29:45.180 11:15:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:45.180 11:15:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:45.180 11:15:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:45.180 11:15:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:45.180 11:15:41 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:45.180 11:15:41 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:45.180 11:15:41 -- common/autotest_common.sh@648 -- # local es=0 00:29:45.180 11:15:41 -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:45.180 11:15:41 -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:45.180 11:15:41 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:45.180 11:15:41 -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:45.180 11:15:41 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:45.180 11:15:41 -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:45.180 11:15:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:45.440 [2024-05-15 11:15:41.909654] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:45.440 [2024-05-15 11:15:41.910413] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c92a80 (107): Transport endpoint is not connected 00:29:45.440 [2024-05-15 11:15:41.911410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c92a80 (9): Bad file descriptor 00:29:45.440 [2024-05-15 11:15:41.912411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:45.440 [2024-05-15 11:15:41.912418] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:45.440 [2024-05-15 11:15:41.912424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:45.440 request: 00:29:45.440 { 00:29:45.440 "name": "nvme0", 00:29:45.440 "trtype": "tcp", 00:29:45.440 "traddr": "127.0.0.1", 00:29:45.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:45.440 "adrfam": "ipv4", 00:29:45.440 "trsvcid": "4420", 00:29:45.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.440 "psk": "key1", 00:29:45.440 "method": "bdev_nvme_attach_controller", 00:29:45.440 "req_id": 1 00:29:45.440 } 00:29:45.440 Got JSON-RPC error response 00:29:45.440 response: 00:29:45.440 { 00:29:45.440 "code": -32602, 00:29:45.440 "message": "Invalid parameters" 00:29:45.440 } 00:29:45.440 11:15:41 -- common/autotest_common.sh@651 -- # es=1 00:29:45.440 11:15:41 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:45.440 11:15:41 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:45.440 11:15:41 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:45.440 11:15:41 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:45.440 11:15:41 -- keyring/common.sh@12 -- # get_key key0 00:29:45.440 11:15:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:45.440 11:15:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:45.440 11:15:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:45.440 11:15:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:45.440 11:15:42 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:45.440 11:15:42 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:45.700 11:15:42 -- keyring/common.sh@12 -- # get_key key1 00:29:45.700 11:15:42 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:45.700 11:15:42 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:45.700 11:15:42 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:45.700 11:15:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:45.700 11:15:42 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:45.700 11:15:42 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:45.700 11:15:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:45.961 11:15:42 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:45.961 11:15:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:45.961 11:15:42 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:45.961 11:15:42 -- keyring/file.sh@77 -- # jq length 00:29:45.961 11:15:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.222 11:15:42 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:46.222 11:15:42 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.fryCtQjDCy 00:29:46.222 11:15:42 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy 00:29:46.222 11:15:42 -- common/autotest_common.sh@648 -- # local es=0 00:29:46.222 11:15:42 -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy 00:29:46.222 11:15:42 -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:46.222 11:15:42 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.222 11:15:42 -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:46.222 11:15:42 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.222 11:15:42 -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy 00:29:46.222 11:15:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy 00:29:46.482 [2024-05-15 11:15:42.893324] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fryCtQjDCy': 0100660 00:29:46.482 [2024-05-15 11:15:42.893341] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:46.482 request: 00:29:46.482 { 00:29:46.482 "name": "key0", 00:29:46.482 "path": "/tmp/tmp.fryCtQjDCy", 00:29:46.482 "method": "keyring_file_add_key", 00:29:46.482 "req_id": 1 00:29:46.482 } 00:29:46.482 Got JSON-RPC error response 00:29:46.482 response: 00:29:46.482 { 00:29:46.482 "code": -1, 00:29:46.482 "message": "Operation not permitted" 00:29:46.482 } 00:29:46.482 11:15:42 -- common/autotest_common.sh@651 -- # es=1 00:29:46.482 11:15:42 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:46.482 11:15:42 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:46.482 11:15:42 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:46.482 11:15:42 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.fryCtQjDCy 00:29:46.482 11:15:42 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy 00:29:46.482 11:15:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fryCtQjDCy 00:29:46.482 11:15:43 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.fryCtQjDCy 00:29:46.482 11:15:43 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:46.482 11:15:43 -- keyring/common.sh@12 -- # get_key key0 00:29:46.482 11:15:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:46.482 11:15:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.482 11:15:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.482 11:15:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:46.742 11:15:43 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:46.742 11:15:43 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:46.742 11:15:43 -- common/autotest_common.sh@648 -- # local es=0 00:29:46.742 11:15:43 -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:46.742 11:15:43 -- 
common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:46.742 11:15:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.742 11:15:43 -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:46.742 11:15:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.742 11:15:43 -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:46.742 11:15:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:46.742 [2024-05-15 11:15:43.358496] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fryCtQjDCy': No such file or directory 00:29:46.742 [2024-05-15 11:15:43.358510] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:46.742 [2024-05-15 11:15:43.358526] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:46.742 [2024-05-15 11:15:43.358530] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:46.742 [2024-05-15 11:15:43.358536] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:46.742 request: 00:29:46.742 { 00:29:46.742 "name": "nvme0", 00:29:46.742 "trtype": "tcp", 00:29:46.742 "traddr": "127.0.0.1", 00:29:46.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.742 "adrfam": "ipv4", 00:29:46.742 "trsvcid": "4420", 00:29:46.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.742 "psk": "key0", 00:29:46.742 "method": "bdev_nvme_attach_controller", 00:29:46.742 "req_id": 1 00:29:46.742 } 00:29:46.742 Got JSON-RPC error response 00:29:46.742 response: 00:29:46.742 { 00:29:46.742 "code": -19, 00:29:46.742 "message": "No such device" 00:29:46.742 } 00:29:46.742 11:15:43 -- common/autotest_common.sh@651 -- # es=1 00:29:46.742 11:15:43 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:46.742 11:15:43 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:46.742 11:15:43 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:46.742 11:15:43 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:46.742 11:15:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:47.002 11:15:43 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:47.002 11:15:43 -- keyring/common.sh@15 -- # local name key digest path 00:29:47.002 11:15:43 -- keyring/common.sh@17 -- # name=key0 00:29:47.002 11:15:43 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:47.002 11:15:43 -- keyring/common.sh@17 -- # digest=0 00:29:47.002 11:15:43 -- keyring/common.sh@18 -- # mktemp 00:29:47.002 11:15:43 -- keyring/common.sh@18 -- # path=/tmp/tmp.Rd1LYT988n 00:29:47.002 11:15:43 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:47.002 11:15:43 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:47.002 11:15:43 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:47.002 11:15:43 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:47.002 11:15:43 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:47.002 11:15:43 -- nvmf/common.sh@693 -- # digest=0 00:29:47.002 11:15:43 -- nvmf/common.sh@694 -- # python - 00:29:47.002 11:15:43 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Rd1LYT988n 00:29:47.002 11:15:43 -- keyring/common.sh@23 -- # echo /tmp/tmp.Rd1LYT988n 00:29:47.002 11:15:43 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Rd1LYT988n 00:29:47.002 11:15:43 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Rd1LYT988n 00:29:47.002 11:15:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Rd1LYT988n 00:29:47.262 11:15:43 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.262 11:15:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.522 nvme0n1 00:29:47.522 11:15:43 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:47.522 11:15:44 -- keyring/common.sh@12 -- # get_key key0 00:29:47.522 11:15:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:47.522 11:15:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:47.522 11:15:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:47.522 11:15:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.522 11:15:44 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:47.522 11:15:44 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:47.522 11:15:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:47.782 11:15:44 -- keyring/file.sh@101 -- # jq -r .removed 00:29:47.782 11:15:44 -- keyring/file.sh@101 -- # get_key key0 00:29:47.782 11:15:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:47.782 11:15:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:47.782 11:15:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.042 11:15:44 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:48.042 11:15:44 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:48.042 11:15:44 -- keyring/common.sh@12 -- # get_key key0 00:29:48.042 11:15:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:48.042 11:15:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.042 11:15:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.042 11:15:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:48.042 11:15:44 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:48.042 11:15:44 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:48.042 11:15:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:48.303 11:15:44 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:48.303 11:15:44 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.303 11:15:44 -- keyring/file.sh@104 -- # jq length 00:29:48.563 11:15:44 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:48.563 11:15:44 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Rd1LYT988n 00:29:48.563 11:15:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Rd1LYT988n 00:29:48.563 11:15:45 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.p54S1yiBk3 00:29:48.563 11:15:45 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.p54S1yiBk3 00:29:48.822 11:15:45 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:48.822 11:15:45 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:49.082 nvme0n1 00:29:49.082 11:15:45 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:49.082 11:15:45 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:49.342 11:15:45 -- keyring/file.sh@112 -- # config='{ 00:29:49.342 "subsystems": [ 00:29:49.342 { 00:29:49.342 "subsystem": "keyring", 00:29:49.342 "config": [ 00:29:49.342 { 00:29:49.342 "method": "keyring_file_add_key", 00:29:49.342 "params": { 00:29:49.342 "name": "key0", 00:29:49.342 "path": "/tmp/tmp.Rd1LYT988n" 00:29:49.342 } 00:29:49.342 }, 00:29:49.342 { 00:29:49.342 "method": "keyring_file_add_key", 00:29:49.342 "params": { 00:29:49.342 "name": "key1", 00:29:49.342 "path": "/tmp/tmp.p54S1yiBk3" 00:29:49.342 } 00:29:49.342 } 00:29:49.342 ] 00:29:49.342 }, 00:29:49.342 { 00:29:49.342 "subsystem": "iobuf", 00:29:49.342 "config": [ 00:29:49.342 { 00:29:49.342 "method": "iobuf_set_options", 00:29:49.342 "params": { 00:29:49.342 "small_pool_count": 8192, 00:29:49.342 "large_pool_count": 1024, 00:29:49.342 "small_bufsize": 8192, 00:29:49.342 "large_bufsize": 135168 00:29:49.342 } 00:29:49.342 } 00:29:49.342 ] 00:29:49.342 }, 00:29:49.342 { 00:29:49.342 "subsystem": "sock", 00:29:49.342 "config": [ 00:29:49.342 { 00:29:49.342 "method": "sock_impl_set_options", 00:29:49.342 "params": { 00:29:49.342 "impl_name": "posix", 00:29:49.342 "recv_buf_size": 2097152, 00:29:49.342 "send_buf_size": 2097152, 00:29:49.342 "enable_recv_pipe": true, 00:29:49.342 "enable_quickack": false, 00:29:49.342 "enable_placement_id": 0, 00:29:49.342 "enable_zerocopy_send_server": true, 00:29:49.342 "enable_zerocopy_send_client": false, 00:29:49.342 "zerocopy_threshold": 0, 00:29:49.342 "tls_version": 0, 00:29:49.342 "enable_ktls": false 00:29:49.342 } 00:29:49.342 }, 00:29:49.342 { 00:29:49.342 "method": "sock_impl_set_options", 00:29:49.342 "params": { 00:29:49.342 "impl_name": "ssl", 00:29:49.342 "recv_buf_size": 4096, 00:29:49.342 "send_buf_size": 4096, 00:29:49.342 "enable_recv_pipe": true, 00:29:49.342 "enable_quickack": false, 00:29:49.342 "enable_placement_id": 0, 00:29:49.342 "enable_zerocopy_send_server": true, 00:29:49.342 "enable_zerocopy_send_client": false, 00:29:49.342 "zerocopy_threshold": 
0, 00:29:49.342 "tls_version": 0, 00:29:49.342 "enable_ktls": false 00:29:49.342 } 00:29:49.342 } 00:29:49.342 ] 00:29:49.342 }, 00:29:49.342 { 00:29:49.342 "subsystem": "vmd", 00:29:49.342 "config": [] 00:29:49.342 }, 00:29:49.343 { 00:29:49.343 "subsystem": "accel", 00:29:49.343 "config": [ 00:29:49.343 { 00:29:49.343 "method": "accel_set_options", 00:29:49.343 "params": { 00:29:49.343 "small_cache_size": 128, 00:29:49.343 "large_cache_size": 16, 00:29:49.343 "task_count": 2048, 00:29:49.343 "sequence_count": 2048, 00:29:49.343 "buf_count": 2048 00:29:49.343 } 00:29:49.343 } 00:29:49.343 ] 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "subsystem": "bdev", 00:29:49.343 "config": [ 00:29:49.343 { 00:29:49.343 "method": "bdev_set_options", 00:29:49.343 "params": { 00:29:49.343 "bdev_io_pool_size": 65535, 00:29:49.343 "bdev_io_cache_size": 256, 00:29:49.343 "bdev_auto_examine": true, 00:29:49.343 "iobuf_small_cache_size": 128, 00:29:49.343 "iobuf_large_cache_size": 16 00:29:49.343 } 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": "bdev_raid_set_options", 00:29:49.343 "params": { 00:29:49.343 "process_window_size_kb": 1024 00:29:49.343 } 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": "bdev_iscsi_set_options", 00:29:49.343 "params": { 00:29:49.343 "timeout_sec": 30 00:29:49.343 } 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": "bdev_nvme_set_options", 00:29:49.343 "params": { 00:29:49.343 "action_on_timeout": "none", 00:29:49.343 "timeout_us": 0, 00:29:49.343 "timeout_admin_us": 0, 00:29:49.343 "keep_alive_timeout_ms": 10000, 00:29:49.343 "arbitration_burst": 0, 00:29:49.343 "low_priority_weight": 0, 00:29:49.343 "medium_priority_weight": 0, 00:29:49.343 "high_priority_weight": 0, 00:29:49.343 "nvme_adminq_poll_period_us": 10000, 00:29:49.343 "nvme_ioq_poll_period_us": 0, 00:29:49.343 "io_queue_requests": 512, 00:29:49.343 "delay_cmd_submit": true, 00:29:49.343 "transport_retry_count": 4, 00:29:49.343 "bdev_retry_count": 3, 00:29:49.343 "transport_ack_timeout": 0, 00:29:49.343 "ctrlr_loss_timeout_sec": 0, 00:29:49.343 "reconnect_delay_sec": 0, 00:29:49.343 "fast_io_fail_timeout_sec": 0, 00:29:49.343 "disable_auto_failback": false, 00:29:49.343 "generate_uuids": false, 00:29:49.343 "transport_tos": 0, 00:29:49.343 "nvme_error_stat": false, 00:29:49.343 "rdma_srq_size": 0, 00:29:49.343 "io_path_stat": false, 00:29:49.343 "allow_accel_sequence": false, 00:29:49.343 "rdma_max_cq_size": 0, 00:29:49.343 "rdma_cm_event_timeout_ms": 0, 00:29:49.343 "dhchap_digests": [ 00:29:49.343 "sha256", 00:29:49.343 "sha384", 00:29:49.343 "sha512" 00:29:49.343 ], 00:29:49.343 "dhchap_dhgroups": [ 00:29:49.343 "null", 00:29:49.343 "ffdhe2048", 00:29:49.343 "ffdhe3072", 00:29:49.343 "ffdhe4096", 00:29:49.343 "ffdhe6144", 00:29:49.343 "ffdhe8192" 00:29:49.343 ] 00:29:49.343 } 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": "bdev_nvme_attach_controller", 00:29:49.343 "params": { 00:29:49.343 "name": "nvme0", 00:29:49.343 "trtype": "TCP", 00:29:49.343 "adrfam": "IPv4", 00:29:49.343 "traddr": "127.0.0.1", 00:29:49.343 "trsvcid": "4420", 00:29:49.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.343 "prchk_reftag": false, 00:29:49.343 "prchk_guard": false, 00:29:49.343 "ctrlr_loss_timeout_sec": 0, 00:29:49.343 "reconnect_delay_sec": 0, 00:29:49.343 "fast_io_fail_timeout_sec": 0, 00:29:49.343 "psk": "key0", 00:29:49.343 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:49.343 "hdgst": false, 00:29:49.343 "ddgst": false 00:29:49.343 } 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": 
"bdev_nvme_set_hotplug", 00:29:49.343 "params": { 00:29:49.343 "period_us": 100000, 00:29:49.343 "enable": false 00:29:49.343 } 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": "bdev_wait_for_examine" 00:29:49.343 } 00:29:49.343 ] 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "subsystem": "nbd", 00:29:49.343 "config": [] 00:29:49.343 } 00:29:49.343 ] 00:29:49.343 }' 00:29:49.343 11:15:45 -- keyring/file.sh@114 -- # killprocess 555622 00:29:49.343 11:15:45 -- common/autotest_common.sh@946 -- # '[' -z 555622 ']' 00:29:49.343 11:15:45 -- common/autotest_common.sh@950 -- # kill -0 555622 00:29:49.343 11:15:45 -- common/autotest_common.sh@951 -- # uname 00:29:49.343 11:15:45 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:49.343 11:15:45 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 555622 00:29:49.343 11:15:45 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:49.343 11:15:45 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:49.343 11:15:45 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 555622' 00:29:49.343 killing process with pid 555622 00:29:49.343 11:15:45 -- common/autotest_common.sh@965 -- # kill 555622 00:29:49.343 Received shutdown signal, test time was about 1.000000 seconds 00:29:49.343 00:29:49.343 Latency(us) 00:29:49.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.343 =================================================================================================================== 00:29:49.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:49.343 11:15:45 -- common/autotest_common.sh@970 -- # wait 555622 00:29:49.343 11:15:45 -- keyring/file.sh@117 -- # bperfpid=557285 00:29:49.343 11:15:45 -- keyring/file.sh@119 -- # waitforlisten 557285 /var/tmp/bperf.sock 00:29:49.343 11:15:45 -- common/autotest_common.sh@827 -- # '[' -z 557285 ']' 00:29:49.343 11:15:45 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:49.343 11:15:45 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:49.343 11:15:45 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:49.343 11:15:45 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:49.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:49.343 11:15:45 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:49.343 11:15:45 -- keyring/file.sh@115 -- # echo '{ 00:29:49.343 "subsystems": [ 00:29:49.343 { 00:29:49.343 "subsystem": "keyring", 00:29:49.343 "config": [ 00:29:49.343 { 00:29:49.343 "method": "keyring_file_add_key", 00:29:49.343 "params": { 00:29:49.343 "name": "key0", 00:29:49.343 "path": "/tmp/tmp.Rd1LYT988n" 00:29:49.343 } 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": "keyring_file_add_key", 00:29:49.343 "params": { 00:29:49.343 "name": "key1", 00:29:49.343 "path": "/tmp/tmp.p54S1yiBk3" 00:29:49.343 } 00:29:49.343 } 00:29:49.343 ] 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "subsystem": "iobuf", 00:29:49.343 "config": [ 00:29:49.343 { 00:29:49.343 "method": "iobuf_set_options", 00:29:49.343 "params": { 00:29:49.343 "small_pool_count": 8192, 00:29:49.343 "large_pool_count": 1024, 00:29:49.343 "small_bufsize": 8192, 00:29:49.343 "large_bufsize": 135168 00:29:49.343 } 00:29:49.343 } 00:29:49.343 ] 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "subsystem": "sock", 00:29:49.343 "config": [ 00:29:49.343 { 00:29:49.343 "method": "sock_impl_set_options", 00:29:49.343 "params": { 00:29:49.343 "impl_name": "posix", 00:29:49.343 "recv_buf_size": 2097152, 00:29:49.343 "send_buf_size": 2097152, 00:29:49.343 "enable_recv_pipe": true, 00:29:49.343 "enable_quickack": false, 00:29:49.343 "enable_placement_id": 0, 00:29:49.343 "enable_zerocopy_send_server": true, 00:29:49.343 "enable_zerocopy_send_client": false, 00:29:49.343 "zerocopy_threshold": 0, 00:29:49.343 "tls_version": 0, 00:29:49.343 "enable_ktls": false 00:29:49.343 } 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": "sock_impl_set_options", 00:29:49.343 "params": { 00:29:49.343 "impl_name": "ssl", 00:29:49.343 "recv_buf_size": 4096, 00:29:49.343 "send_buf_size": 4096, 00:29:49.343 "enable_recv_pipe": true, 00:29:49.343 "enable_quickack": false, 00:29:49.343 "enable_placement_id": 0, 00:29:49.343 "enable_zerocopy_send_server": true, 00:29:49.343 "enable_zerocopy_send_client": false, 00:29:49.343 "zerocopy_threshold": 0, 00:29:49.343 "tls_version": 0, 00:29:49.343 "enable_ktls": false 00:29:49.343 } 00:29:49.343 } 00:29:49.343 ] 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "subsystem": "vmd", 00:29:49.343 "config": [] 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "subsystem": "accel", 00:29:49.343 "config": [ 00:29:49.343 { 00:29:49.343 "method": "accel_set_options", 00:29:49.343 "params": { 00:29:49.343 "small_cache_size": 128, 00:29:49.343 "large_cache_size": 16, 00:29:49.343 "task_count": 2048, 00:29:49.344 "sequence_count": 2048, 00:29:49.344 "buf_count": 2048 00:29:49.344 } 00:29:49.344 } 00:29:49.344 ] 00:29:49.344 }, 00:29:49.344 { 00:29:49.344 "subsystem": "bdev", 00:29:49.344 "config": [ 00:29:49.344 { 00:29:49.344 "method": "bdev_set_options", 00:29:49.344 "params": { 00:29:49.344 "bdev_io_pool_size": 65535, 00:29:49.344 "bdev_io_cache_size": 256, 00:29:49.344 "bdev_auto_examine": true, 00:29:49.344 "iobuf_small_cache_size": 128, 00:29:49.344 "iobuf_large_cache_size": 16 00:29:49.344 } 00:29:49.344 }, 00:29:49.344 { 00:29:49.344 "method": "bdev_raid_set_options", 00:29:49.344 "params": { 00:29:49.344 "process_window_size_kb": 1024 00:29:49.344 } 00:29:49.344 }, 00:29:49.344 { 00:29:49.344 "method": "bdev_iscsi_set_options", 00:29:49.344 "params": { 00:29:49.344 "timeout_sec": 30 00:29:49.344 } 00:29:49.344 }, 00:29:49.344 { 00:29:49.344 "method": "bdev_nvme_set_options", 00:29:49.344 "params": { 00:29:49.344 "action_on_timeout": "none", 
00:29:49.344 "timeout_us": 0, 00:29:49.344 "timeout_admin_us": 0, 00:29:49.344 "keep_alive_timeout_ms": 10000, 00:29:49.344 "arbitration_burst": 0, 00:29:49.344 "low_priority_weight": 0, 00:29:49.344 "medium_priority_weight": 0, 00:29:49.344 "high_priority_weight": 0, 00:29:49.344 "nvme_adminq_poll_period_us": 10000, 00:29:49.344 "nvme_ioq_poll_period_us": 0, 00:29:49.344 "io_queue_requests": 512, 00:29:49.344 "delay_cmd_submit": true, 00:29:49.344 "transport_retry_count": 4, 00:29:49.344 "bdev_retry_count": 3, 00:29:49.344 "transport_ack_timeout": 0, 00:29:49.344 "ctrlr_loss_timeout_sec": 0, 00:29:49.344 "reconnect_delay_sec": 0, 00:29:49.344 "fast_io_fail_timeout_sec": 0, 00:29:49.344 "disable_auto_failback": false, 00:29:49.344 "generate_uuids": false, 00:29:49.344 "transport_tos": 0, 00:29:49.344 "nvme_error_stat": false, 00:29:49.344 "rdma_srq_size": 0, 00:29:49.344 "io_path_stat": false, 00:29:49.344 "allow_accel_sequence": false, 00:29:49.344 "rdma_max_cq_size": 0, 00:29:49.344 "rdma_cm_event_timeout_ms": 0, 00:29:49.344 "dhchap_digests": [ 00:29:49.344 "sha256", 00:29:49.344 "sha384 11:15:45 -- common/autotest_common.sh@10 -- # set +x 00:29:49.344 ", 00:29:49.344 "sha512" 00:29:49.344 ], 00:29:49.344 "dhchap_dhgroups": [ 00:29:49.344 "null", 00:29:49.344 "ffdhe2048", 00:29:49.344 "ffdhe3072", 00:29:49.344 "ffdhe4096", 00:29:49.344 "ffdhe6144", 00:29:49.344 "ffdhe8192" 00:29:49.344 ] 00:29:49.344 } 00:29:49.344 }, 00:29:49.344 { 00:29:49.344 "method": "bdev_nvme_attach_controller", 00:29:49.344 "params": { 00:29:49.344 "name": "nvme0", 00:29:49.344 "trtype": "TCP", 00:29:49.344 "adrfam": "IPv4", 00:29:49.344 "traddr": "127.0.0.1", 00:29:49.344 "trsvcid": "4420", 00:29:49.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.344 "prchk_reftag": false, 00:29:49.344 "prchk_guard": false, 00:29:49.344 "ctrlr_loss_timeout_sec": 0, 00:29:49.344 "reconnect_delay_sec": 0, 00:29:49.344 "fast_io_fail_timeout_sec": 0, 00:29:49.344 "psk": "key0", 00:29:49.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:49.344 "hdgst": false, 00:29:49.344 "ddgst": false 00:29:49.344 } 00:29:49.344 }, 00:29:49.344 { 00:29:49.344 "method": "bdev_nvme_set_hotplug", 00:29:49.344 "params": { 00:29:49.344 "period_us": 100000, 00:29:49.344 "enable": false 00:29:49.344 } 00:29:49.344 }, 00:29:49.344 { 00:29:49.344 "method": "bdev_wait_for_examine" 00:29:49.344 } 00:29:49.344 ] 00:29:49.344 }, 00:29:49.344 { 00:29:49.344 "subsystem": "nbd", 00:29:49.344 "config": [] 00:29:49.344 } 00:29:49.344 ] 00:29:49.344 }' 00:29:49.344 [2024-05-15 11:15:45.949874] Starting SPDK v24.05-pre git sha1 7d4b19830 / DPDK 23.11.0 initialization... 
00:29:49.344 [2024-05-15 11:15:45.949928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557285 ] 00:29:49.344 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.604 [2024-05-15 11:15:46.023801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.604 [2024-05-15 11:15:46.076779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.604 [2024-05-15 11:15:46.210441] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:50.174 11:15:46 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:50.174 11:15:46 -- common/autotest_common.sh@860 -- # return 0 00:29:50.174 11:15:46 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:50.174 11:15:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.174 11:15:46 -- keyring/file.sh@120 -- # jq length 00:29:50.434 11:15:46 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:50.434 11:15:46 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:50.434 11:15:46 -- keyring/common.sh@12 -- # get_key key0 00:29:50.434 11:15:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:50.434 11:15:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:50.434 11:15:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:50.434 11:15:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.434 11:15:47 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:50.434 11:15:47 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:50.434 11:15:47 -- keyring/common.sh@12 -- # get_key key1 00:29:50.434 11:15:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:50.434 11:15:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:50.434 11:15:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:50.434 11:15:47 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.694 11:15:47 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:50.694 11:15:47 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:50.694 11:15:47 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:50.694 11:15:47 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:50.694 11:15:47 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:50.694 11:15:47 -- keyring/file.sh@1 -- # cleanup 00:29:50.694 11:15:47 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Rd1LYT988n /tmp/tmp.p54S1yiBk3 00:29:50.694 11:15:47 -- keyring/file.sh@20 -- # killprocess 557285 00:29:50.694 11:15:47 -- common/autotest_common.sh@946 -- # '[' -z 557285 ']' 00:29:50.694 11:15:47 -- common/autotest_common.sh@950 -- # kill -0 557285 00:29:50.694 11:15:47 -- common/autotest_common.sh@951 -- # uname 00:29:50.954 11:15:47 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:50.954 11:15:47 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 557285 00:29:50.954 11:15:47 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:50.954 11:15:47 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:50.954 11:15:47 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 557285' 00:29:50.954 killing process with pid 557285 00:29:50.954 11:15:47 -- common/autotest_common.sh@965 -- # kill 557285 00:29:50.954 Received shutdown signal, test time was about 1.000000 seconds 00:29:50.954 00:29:50.954 Latency(us) 00:29:50.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.954 =================================================================================================================== 00:29:50.954 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:50.954 11:15:47 -- common/autotest_common.sh@970 -- # wait 557285 00:29:50.954 11:15:47 -- keyring/file.sh@21 -- # killprocess 555426 00:29:50.954 11:15:47 -- common/autotest_common.sh@946 -- # '[' -z 555426 ']' 00:29:50.954 11:15:47 -- common/autotest_common.sh@950 -- # kill -0 555426 00:29:50.954 11:15:47 -- common/autotest_common.sh@951 -- # uname 00:29:50.954 11:15:47 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:50.954 11:15:47 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 555426 00:29:50.954 11:15:47 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:50.954 11:15:47 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:50.954 11:15:47 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 555426' 00:29:50.954 killing process with pid 555426 00:29:50.954 11:15:47 -- common/autotest_common.sh@965 -- # kill 555426 00:29:50.954 [2024-05-15 11:15:47.567301] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:50.954 [2024-05-15 11:15:47.567335] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:50.954 11:15:47 -- common/autotest_common.sh@970 -- # wait 555426 00:29:51.215 00:29:51.215 real 0m11.059s 00:29:51.215 user 0m26.623s 00:29:51.215 sys 0m2.433s 00:29:51.215 11:15:47 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:51.215 11:15:47 -- common/autotest_common.sh@10 -- # set +x 00:29:51.215 ************************************ 00:29:51.215 END TEST keyring_file 00:29:51.215 ************************************ 00:29:51.215 11:15:47 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:51.215 11:15:47 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:51.215 11:15:47 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:51.215 11:15:47 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:51.215 11:15:47 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:51.215 11:15:47 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:51.215 11:15:47 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:51.215 11:15:47 -- spdk/autotest.sh@380 -- # 
timing_enter post_cleanup 00:29:51.215 11:15:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:51.215 11:15:47 -- common/autotest_common.sh@10 -- # set +x 00:29:51.215 11:15:47 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:29:51.215 11:15:47 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:29:51.215 11:15:47 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:29:51.215 11:15:47 -- common/autotest_common.sh@10 -- # set +x 00:29:59.351 INFO: APP EXITING 00:29:59.351 INFO: killing all VMs 00:29:59.351 INFO: killing vhost app 00:29:59.351 INFO: EXIT DONE 00:30:01.895 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:65:00.0 (144d a80a): Already using the nvme driver 00:30:02.155 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:30:02.155 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:30:02.415 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:30:02.415 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:30:02.415 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:30:02.415 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:30:02.415 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:30:05.711 Cleaning 00:30:05.711 Removing: /var/run/dpdk/spdk0/config 00:30:05.711 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:05.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:05.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:05.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:05.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:05.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:05.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:05.972 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:05.972 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:05.972 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:05.972 Removing: /var/run/dpdk/spdk1/config 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:05.972 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:05.972 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:05.972 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:05.972 Removing: /var/run/dpdk/spdk2/config 00:30:05.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:05.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:05.972 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:05.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:05.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:05.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:05.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:05.972 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:05.972 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:05.972 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:05.972 Removing: /var/run/dpdk/spdk3/config 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:05.972 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:05.972 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:05.972 Removing: /var/run/dpdk/spdk4/config 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:05.972 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:05.972 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:05.972 Removing: /dev/shm/bdev_svc_trace.1 00:30:05.972 Removing: /dev/shm/nvmf_trace.0 00:30:05.972 Removing: /dev/shm/spdk_tgt_trace.pid129168 00:30:05.972 Removing: /var/run/dpdk/spdk0 00:30:05.972 Removing: /var/run/dpdk/spdk1 00:30:05.972 Removing: /var/run/dpdk/spdk2 00:30:05.972 Removing: /var/run/dpdk/spdk3 00:30:06.232 Removing: /var/run/dpdk/spdk4 00:30:06.232 Removing: /var/run/dpdk/spdk_pid127690 00:30:06.232 Removing: /var/run/dpdk/spdk_pid129168 00:30:06.232 Removing: /var/run/dpdk/spdk_pid129895 00:30:06.232 Removing: /var/run/dpdk/spdk_pid131158 00:30:06.232 Removing: /var/run/dpdk/spdk_pid131298 00:30:06.232 Removing: /var/run/dpdk/spdk_pid133026 00:30:06.232 Removing: /var/run/dpdk/spdk_pid133039 00:30:06.232 Removing: /var/run/dpdk/spdk_pid133471 00:30:06.232 Removing: /var/run/dpdk/spdk_pid134459 00:30:06.232 Removing: /var/run/dpdk/spdk_pid135062 00:30:06.232 Removing: /var/run/dpdk/spdk_pid135448 00:30:06.232 Removing: /var/run/dpdk/spdk_pid135833 00:30:06.232 Removing: /var/run/dpdk/spdk_pid136190 00:30:06.232 Removing: /var/run/dpdk/spdk_pid136425 00:30:06.232 Removing: /var/run/dpdk/spdk_pid136672 00:30:06.232 Removing: /var/run/dpdk/spdk_pid137022 00:30:06.232 Removing: /var/run/dpdk/spdk_pid137405 00:30:06.232 Removing: /var/run/dpdk/spdk_pid138517 00:30:06.232 Removing: /var/run/dpdk/spdk_pid142056 00:30:06.232 Removing: /var/run/dpdk/spdk_pid142432 00:30:06.232 Removing: /var/run/dpdk/spdk_pid142793 00:30:06.232 Removing: /var/run/dpdk/spdk_pid142807 00:30:06.232 Removing: /var/run/dpdk/spdk_pid143326 00:30:06.232 Removing: /var/run/dpdk/spdk_pid143512 00:30:06.232 
Removing: /var/run/dpdk/spdk_pid143891 00:30:06.232 Removing: /var/run/dpdk/spdk_pid144222 00:30:06.232 Removing: /var/run/dpdk/spdk_pid144514 00:30:06.232 Removing: /var/run/dpdk/spdk_pid144600 00:30:06.232 Removing: /var/run/dpdk/spdk_pid144958 00:30:06.232 Removing: /var/run/dpdk/spdk_pid144963 00:30:06.232 Removing: /var/run/dpdk/spdk_pid145404 00:30:06.232 Removing: /var/run/dpdk/spdk_pid145752 00:30:06.232 Removing: /var/run/dpdk/spdk_pid146150 00:30:06.232 Removing: /var/run/dpdk/spdk_pid146324 00:30:06.232 Removing: /var/run/dpdk/spdk_pid146531 00:30:06.232 Removing: /var/run/dpdk/spdk_pid146613 00:30:06.232 Removing: /var/run/dpdk/spdk_pid146961 00:30:06.232 Removing: /var/run/dpdk/spdk_pid147316 00:30:06.232 Removing: /var/run/dpdk/spdk_pid147538 00:30:06.232 Removing: /var/run/dpdk/spdk_pid147730 00:30:06.232 Removing: /var/run/dpdk/spdk_pid148057 00:30:06.232 Removing: /var/run/dpdk/spdk_pid148404 00:30:06.233 Removing: /var/run/dpdk/spdk_pid148756 00:30:06.233 Removing: /var/run/dpdk/spdk_pid149062 00:30:06.233 Removing: /var/run/dpdk/spdk_pid149240 00:30:06.233 Removing: /var/run/dpdk/spdk_pid149493 00:30:06.233 Removing: /var/run/dpdk/spdk_pid149851 00:30:06.233 Removing: /var/run/dpdk/spdk_pid150198 00:30:06.233 Removing: /var/run/dpdk/spdk_pid150547 00:30:06.233 Removing: /var/run/dpdk/spdk_pid150746 00:30:06.233 Removing: /var/run/dpdk/spdk_pid150952 00:30:06.233 Removing: /var/run/dpdk/spdk_pid151287 00:30:06.233 Removing: /var/run/dpdk/spdk_pid151643 00:30:06.233 Removing: /var/run/dpdk/spdk_pid151998 00:30:06.233 Removing: /var/run/dpdk/spdk_pid152280 00:30:06.233 Removing: /var/run/dpdk/spdk_pid152522 00:30:06.233 Removing: /var/run/dpdk/spdk_pid152765 00:30:06.233 Removing: /var/run/dpdk/spdk_pid153179 00:30:06.233 Removing: /var/run/dpdk/spdk_pid157529 00:30:06.233 Removing: /var/run/dpdk/spdk_pid211082 00:30:06.233 Removing: /var/run/dpdk/spdk_pid216152 00:30:06.233 Removing: /var/run/dpdk/spdk_pid228159 00:30:06.233 Removing: /var/run/dpdk/spdk_pid234639 00:30:06.233 Removing: /var/run/dpdk/spdk_pid239866 00:30:06.233 Removing: /var/run/dpdk/spdk_pid240776 00:30:06.493 Removing: /var/run/dpdk/spdk_pid254458 00:30:06.493 Removing: /var/run/dpdk/spdk_pid254461 00:30:06.493 Removing: /var/run/dpdk/spdk_pid255483 00:30:06.493 Removing: /var/run/dpdk/spdk_pid256528 00:30:06.493 Removing: /var/run/dpdk/spdk_pid257613 00:30:06.493 Removing: /var/run/dpdk/spdk_pid258254 00:30:06.493 Removing: /var/run/dpdk/spdk_pid258403 00:30:06.493 Removing: /var/run/dpdk/spdk_pid258603 00:30:06.493 Removing: /var/run/dpdk/spdk_pid258818 00:30:06.493 Removing: /var/run/dpdk/spdk_pid258823 00:30:06.493 Removing: /var/run/dpdk/spdk_pid259827 00:30:06.493 Removing: /var/run/dpdk/spdk_pid260832 00:30:06.493 Removing: /var/run/dpdk/spdk_pid261901 00:30:06.493 Removing: /var/run/dpdk/spdk_pid262553 00:30:06.493 Removing: /var/run/dpdk/spdk_pid262687 00:30:06.493 Removing: /var/run/dpdk/spdk_pid262947 00:30:06.493 Removing: /var/run/dpdk/spdk_pid264298 00:30:06.493 Removing: /var/run/dpdk/spdk_pid265677 00:30:06.494 Removing: /var/run/dpdk/spdk_pid275803 00:30:06.494 Removing: /var/run/dpdk/spdk_pid276249 00:30:06.494 Removing: /var/run/dpdk/spdk_pid281261 00:30:06.494 Removing: /var/run/dpdk/spdk_pid288697 00:30:06.494 Removing: /var/run/dpdk/spdk_pid291768 00:30:06.494 Removing: /var/run/dpdk/spdk_pid303924 00:30:06.494 Removing: /var/run/dpdk/spdk_pid314600 00:30:06.494 Removing: /var/run/dpdk/spdk_pid316777 00:30:06.494 Removing: /var/run/dpdk/spdk_pid317953 00:30:06.494 Removing: 
/var/run/dpdk/spdk_pid338324 00:30:06.494 Removing: /var/run/dpdk/spdk_pid343325 00:30:06.494 Removing: /var/run/dpdk/spdk_pid348530 00:30:06.494 Removing: /var/run/dpdk/spdk_pid350534 00:30:06.494 Removing: /var/run/dpdk/spdk_pid352864 00:30:06.494 Removing: /var/run/dpdk/spdk_pid353038 00:30:06.494 Removing: /var/run/dpdk/spdk_pid353235 00:30:06.494 Removing: /var/run/dpdk/spdk_pid353577 00:30:06.494 Removing: /var/run/dpdk/spdk_pid353982 00:30:06.494 Removing: /var/run/dpdk/spdk_pid356330 00:30:06.494 Removing: /var/run/dpdk/spdk_pid357429 00:30:06.494 Removing: /var/run/dpdk/spdk_pid357820 00:30:06.494 Removing: /var/run/dpdk/spdk_pid360542 00:30:06.494 Removing: /var/run/dpdk/spdk_pid361269 00:30:06.494 Removing: /var/run/dpdk/spdk_pid362011 00:30:06.494 Removing: /var/run/dpdk/spdk_pid367029 00:30:06.494 Removing: /var/run/dpdk/spdk_pid379276 00:30:06.494 Removing: /var/run/dpdk/spdk_pid384109 00:30:06.494 Removing: /var/run/dpdk/spdk_pid392191 00:30:06.494 Removing: /var/run/dpdk/spdk_pid393691 00:30:06.494 Removing: /var/run/dpdk/spdk_pid395344 00:30:06.494 Removing: /var/run/dpdk/spdk_pid400625 00:30:06.494 Removing: /var/run/dpdk/spdk_pid405330 00:30:06.494 Removing: /var/run/dpdk/spdk_pid414299 00:30:06.494 Removing: /var/run/dpdk/spdk_pid414390 00:30:06.494 Removing: /var/run/dpdk/spdk_pid419313 00:30:06.494 Removing: /var/run/dpdk/spdk_pid419467 00:30:06.494 Removing: /var/run/dpdk/spdk_pid419783 00:30:06.494 Removing: /var/run/dpdk/spdk_pid420247 00:30:06.494 Removing: /var/run/dpdk/spdk_pid420312 00:30:06.494 Removing: /var/run/dpdk/spdk_pid425484 00:30:06.494 Removing: /var/run/dpdk/spdk_pid426306 00:30:06.494 Removing: /var/run/dpdk/spdk_pid431477 00:30:06.494 Removing: /var/run/dpdk/spdk_pid434733 00:30:06.494 Removing: /var/run/dpdk/spdk_pid441204 00:30:06.494 Removing: /var/run/dpdk/spdk_pid448299 00:30:06.494 Removing: /var/run/dpdk/spdk_pid458192 00:30:06.494 Removing: /var/run/dpdk/spdk_pid466511 00:30:06.494 Removing: /var/run/dpdk/spdk_pid466513 00:30:06.494 Removing: /var/run/dpdk/spdk_pid490400 00:30:06.754 Removing: /var/run/dpdk/spdk_pid491090 00:30:06.754 Removing: /var/run/dpdk/spdk_pid491769 00:30:06.754 Removing: /var/run/dpdk/spdk_pid492462 00:30:06.754 Removing: /var/run/dpdk/spdk_pid493515 00:30:06.754 Removing: /var/run/dpdk/spdk_pid494204 00:30:06.754 Removing: /var/run/dpdk/spdk_pid494982 00:30:06.754 Removing: /var/run/dpdk/spdk_pid495880 00:30:06.754 Removing: /var/run/dpdk/spdk_pid501491 00:30:06.754 Removing: /var/run/dpdk/spdk_pid501757 00:30:06.754 Removing: /var/run/dpdk/spdk_pid508850 00:30:06.754 Removing: /var/run/dpdk/spdk_pid509221 00:30:06.754 Removing: /var/run/dpdk/spdk_pid511727 00:30:06.754 Removing: /var/run/dpdk/spdk_pid519155 00:30:06.754 Removing: /var/run/dpdk/spdk_pid519160 00:30:06.754 Removing: /var/run/dpdk/spdk_pid525019 00:30:06.754 Removing: /var/run/dpdk/spdk_pid527471 00:30:06.754 Removing: /var/run/dpdk/spdk_pid529732 00:30:06.754 Removing: /var/run/dpdk/spdk_pid531235 00:30:06.754 Removing: /var/run/dpdk/spdk_pid533732 00:30:06.754 Removing: /var/run/dpdk/spdk_pid534994 00:30:06.754 Removing: /var/run/dpdk/spdk_pid544984 00:30:06.754 Removing: /var/run/dpdk/spdk_pid545637 00:30:06.754 Removing: /var/run/dpdk/spdk_pid546335 00:30:06.754 Removing: /var/run/dpdk/spdk_pid549732 00:30:06.754 Removing: /var/run/dpdk/spdk_pid550400 00:30:06.754 Removing: /var/run/dpdk/spdk_pid550926 00:30:06.754 Removing: /var/run/dpdk/spdk_pid555426 00:30:06.754 Removing: /var/run/dpdk/spdk_pid555622 00:30:06.754 Removing: 
/var/run/dpdk/spdk_pid557285 00:30:06.754 Clean 00:30:06.754 11:16:03 -- common/autotest_common.sh@1447 -- # return 0 00:30:06.754 11:16:03 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:30:06.754 11:16:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.754 11:16:03 -- common/autotest_common.sh@10 -- # set +x 00:30:06.754 11:16:03 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:30:06.754 11:16:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.754 11:16:03 -- common/autotest_common.sh@10 -- # set +x 00:30:07.013 11:16:03 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:07.013 11:16:03 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:07.013 11:16:03 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:07.013 11:16:03 -- spdk/autotest.sh@389 -- # hash lcov 00:30:07.013 11:16:03 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:07.013 11:16:03 -- spdk/autotest.sh@391 -- # hostname 00:30:07.013 11:16:03 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:07.013 geninfo: WARNING: invalid characters removed from testname! 00:30:33.600 11:16:27 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:33.600 11:16:30 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:36.148 11:16:32 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:37.202 11:16:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:39.206 11:16:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:40.699 11:16:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:42.204 11:16:38 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:42.204 11:16:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.204 11:16:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:42.204 11:16:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.204 11:16:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.204 11:16:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.204 11:16:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.204 11:16:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.204 11:16:38 -- paths/export.sh@5 -- $ export PATH 00:30:42.204 11:16:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.204 11:16:38 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:42.204 11:16:38 -- common/autobuild_common.sh@437 -- $ date +%s 00:30:42.204 11:16:38 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715764598.XXXXXX 00:30:42.204 11:16:38 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715764598.bl9c1i 00:30:42.204 11:16:38 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:30:42.204 11:16:38 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:30:42.204 11:16:38 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:42.204 11:16:38 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:42.204 11:16:38 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:42.204 11:16:38 -- common/autobuild_common.sh@453 -- $ get_config_params 00:30:42.204 11:16:38 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:30:42.204 11:16:38 -- common/autotest_common.sh@10 -- $ set +x 00:30:42.204 11:16:38 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:42.204 11:16:38 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:30:42.204 11:16:38 -- pm/common@17 -- $ local monitor 00:30:42.204 11:16:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:42.204 11:16:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:42.204 11:16:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:42.204 11:16:38 -- pm/common@21 -- $ date +%s 00:30:42.204 11:16:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:42.204 11:16:38 -- pm/common@21 -- $ date +%s 00:30:42.204 11:16:38 -- pm/common@25 -- $ sleep 1 00:30:42.204 11:16:38 -- pm/common@21 -- $ date +%s 00:30:42.204 11:16:38 -- pm/common@21 -- $ date +%s 00:30:42.204 11:16:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764598 00:30:42.204 11:16:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764598 00:30:42.204 11:16:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764598 00:30:42.204 11:16:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764598 00:30:42.204 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764598_collect-vmstat.pm.log 00:30:42.204 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764598_collect-cpu-load.pm.log 00:30:42.204 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764598_collect-cpu-temp.pm.log 00:30:42.204 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764598_collect-bmc-pm.bmc.pm.log 00:30:43.148 11:16:39 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:30:43.148 11:16:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:30:43.148 11:16:39 -- spdk/autopackage.sh@11 -- $ cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:43.148 11:16:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:43.148 11:16:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:43.148 11:16:39 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:43.148 11:16:39 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:43.148 11:16:39 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:43.148 11:16:39 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:43.148 11:16:39 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:43.148 11:16:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:43.148 11:16:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:43.148 11:16:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:43.148 11:16:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:43.148 11:16:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:43.148 11:16:39 -- pm/common@44 -- $ pid=568952 00:30:43.148 11:16:39 -- pm/common@50 -- $ kill -TERM 568952 00:30:43.148 11:16:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:43.148 11:16:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:43.148 11:16:39 -- pm/common@44 -- $ pid=568953 00:30:43.148 11:16:39 -- pm/common@50 -- $ kill -TERM 568953 00:30:43.148 11:16:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:43.148 11:16:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:43.148 11:16:39 -- pm/common@44 -- $ pid=568956 00:30:43.148 11:16:39 -- pm/common@50 -- $ kill -TERM 568956 00:30:43.148 11:16:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:43.148 11:16:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:43.148 11:16:39 -- pm/common@44 -- $ pid=568986 00:30:43.148 11:16:39 -- pm/common@50 -- $ sudo -E kill -TERM 568986 00:30:43.148 + [[ -n 7297 ]] 00:30:43.148 + sudo kill 7297 00:30:43.159 [Pipeline] } 00:30:43.179 [Pipeline] // stage 00:30:43.184 [Pipeline] } 00:30:43.200 [Pipeline] // timeout 00:30:43.205 [Pipeline] } 00:30:43.223 [Pipeline] // catchError 00:30:43.228 [Pipeline] } 00:30:43.246 [Pipeline] // wrap 00:30:43.253 [Pipeline] } 00:30:43.270 [Pipeline] // catchError 00:30:43.280 [Pipeline] stage 00:30:43.282 [Pipeline] { (Epilogue) 00:30:43.296 [Pipeline] catchError 00:30:43.298 [Pipeline] { 00:30:43.312 [Pipeline] echo 00:30:43.314 Cleanup processes 00:30:43.320 [Pipeline] sh 00:30:43.613 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:43.613 569100 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:43.613 569642 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:43.626 [Pipeline] sh 00:30:43.914 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:43.915 ++ grep -v 'sudo pgrep' 00:30:43.915 ++ awk '{print $1}' 00:30:43.915 + sudo kill -9 569100 00:30:43.928 [Pipeline] sh 00:30:44.222 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:54.228 [Pipeline] sh 00:30:54.516 
+ jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:54.516 Artifacts sizes are good 00:30:54.533 [Pipeline] archiveArtifacts 00:30:54.540 Archiving artifacts 00:30:55.198 [Pipeline] sh 00:30:55.484 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:55.499 [Pipeline] cleanWs 00:30:55.509 [WS-CLEANUP] Deleting project workspace... 00:30:55.509 [WS-CLEANUP] Deferred wipeout is used... 00:30:55.516 [WS-CLEANUP] done 00:30:55.518 [Pipeline] } 00:30:55.540 [Pipeline] // catchError 00:30:55.553 [Pipeline] sh 00:30:55.840 + logger -p user.info -t JENKINS-CI 00:30:55.849 [Pipeline] } 00:30:55.865 [Pipeline] // stage 00:30:55.870 [Pipeline] } 00:30:55.887 [Pipeline] // node 00:30:55.893 [Pipeline] End of Pipeline 00:30:55.922 Finished: SUCCESS